OSSEC / Auto-OSSEC Automation in AWS Linux – More GLUE!

 

OSSEC is a tricky devil to automate. By automate, I mean: install the ossec-hids-server, install the ossec-hids-agent, register the agent, and have the server recognize that registration without human prompts. If you’ve done this before, you know there are lots of manual steps. The smart folks over at Binary Defense have added some automation to that process with their auto-ossec tool.

They really took a lot of work out of all of the manual steps needed to connect the client to the server, generate the key and exchange the key…

but… the process was still not as automated as I needed it to be. In AWS you don’t know ahead of time what the OSSEC server IP will be, and that IP needs to be passed to auto-ossec as an argument and placed in the ossec-hids-agent config file. Not to mention all of the repo additions and tweaks to the OSSEC config files that must happen for OSSEC to even start properly.

I have written two scripts, located in my git repository, that automate the installation of the remaining pieces that auto-ossec does not cover, tailored for AWS Linux.

The LinuxOssecServer script installs ossec-hids-server and the Binary Defense auto-ossec listener on the AWS Linux EC2 instance that will play the role of OSSEC server.

We leverage S3 as a storehouse for the needed files:
The atomic installer script that you run to install the OSSEC repositories (https://updates.atomicorp.com/installers/atomic) would go in s3://yourbucketname.

Also, a clone of the Binary Defense repo https://github.com/binarydefense/auto-ossec would go in s3://yourbucketname.

You need to allow your EC2 instance access to S3 and the ability to query other instances, so an EC2 instance role that grants access to S3 and EC2 is required.

The LinuxOssecClient script installs ossec-hids-agent and Binary Defense auto-ossec, then automatically locates the OSSEC server’s IP (via a pre-set tag on its EC2 instance), registers the agent, and starts services on AWS Linux. Same requirements as above for the role.

The line with ‘aws ec2 describe-instances’ must have the correct region, so put your region in there. For the public version of the code, the OSSEC server must carry the tag Role=OssecMaster (the script filters on Name=tag:Role,Values=OssecMaster) so the script can locate the IP address of the EC2 instance that is the OSSEC server; when you start your OssecServer instance, be sure to add that tag.
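
For reference, here is a minimal boto3 sketch of what that lookup does (the scripts themselves use the AWS CLI); the region, the Role=OssecMaster tag, and the use of the private IP address are assumptions to adjust for your environment:

import boto3

# Hypothetical sketch: find the IP of the EC2 instance tagged Role=OssecMaster.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": ["OssecMaster"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

server_ip = None
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        server_ip = instance.get("PrivateIpAddress")

# server_ip gets passed to auto-ossec and written into the ossec-hids-agent config.
print(server_ip)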

You’ll notice some sleep commands I’ve put in the scripts. OSSEC initialization is a little buggy (see reference links 1 and 2 below): you have to restart the ossec-hids-server process on the server after the first agent attempts to register; once that is done, all subsequent agents register with no problem. I don’t know why this is, the behavior is lame, and I hated having to code around it. I need to come up with a better way than just sleeping the script during the first agent registration and then running a restart after x minutes. Or maybe the next version of OSSEC will fix this so the first agent registers without a restart.

Ref 1: Issue where you have to restart OSSEC after the first agent registers

Ref 2: Issue where you have to restart OSSEC after the first agent registers

Also, don’t forget to configure your Security Groups correctly.

You’ll need 9654 TCP open on the OSSEC server for the auto-ossec listener.

You’ll need 1514 UDP open on the OSSEC server to accept agent keep-alive messages.
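
If you prefer to codify those two rules, here is a hedged boto3 sketch that adds both to an existing security group; the group ID and the source CIDR are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
OSSEC_SERVER_SG = "sg-0123456789abcdef0"  # placeholder: the OSSEC server's security group
AGENT_CIDR = "10.0.0.0/16"                # placeholder: wherever your agents live

ec2.authorize_security_group_ingress(
    GroupId=OSSEC_SERVER_SG,
    IpPermissions=[
        # 9654/TCP for the auto-ossec listener
        {"IpProtocol": "tcp", "FromPort": 9654, "ToPort": 9654,
         "IpRanges": [{"CidrIp": AGENT_CIDR}]},
        # 1514/UDP for agent keep-alive traffic
        {"IpProtocol": "udp", "FromPort": 1514, "ToPort": 1514,
         "IpRanges": [{"CidrIp": AGENT_CIDR}]},
    ],
)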

 

Posted in Cloud Security, Cyber Security, Linux Security

Path to AWS Architect Professional – Storage Anti-Patterns

 

This post is a summary of my notes from reading the storage design anti-patterns addressed in this AWS whitepaper.

“An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive”

S3 Anti-Patterns: 

Amazon S3 doesn’t suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

Storage Need: File System. S3 uses a flat namespace and is not meant to be a POSIX-compliant file system. Instead, consider Amazon EFS for a file system.

Storage Need: Structured Data with Query. S3 does not offer query capabilities against specific objects; when you use S3, you need to know the bucket name and key for the files you want to retrieve. Instead use, or pair S3 with, Amazon DynamoDB, Amazon RDS, or Amazon CloudSearch.

Storage Need: Rapidly Changing Data. Use solutions that take read and write latencies into account, such as Amazon EFS, Amazon DynamoDB, Amazon RDS, or Amazon EBS.

Storage Need: Archival Data. Data that requires only infrequent read access, encrypted archival storage, and a long RTO is ideal for Amazon Glacier.

Storage Need: Dynamic Website Hosting. Although S3 is ideal for hosting static content, dynamic websites that depend on server-side scripting or database interaction are better suited to Amazon EC2 or Amazon EFS.

Glacier Anti-Patterns: 

Amazon Glacier doesn’t suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

Storage Need: Rapidly Changing Data. Look for a storage solution with lower read and write latencies, such as Amazon RDS, Amazon EFS, Amazon DynamoDB, or databases running on Amazon EC2.

Storage Need: Immediate Access. Data stored in Glacier is not available immediately; retrieval typically takes 3–5 hours, so if you need to access your data immediately, Amazon S3 is a better choice.

Amazon EFS Anti-Patterns: Amazon EFS doesn’t suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options

Storage Need: Archival Data. Data that requires only infrequent read access, encrypted archival storage, and a long RTO is ideal for Amazon Glacier.

Storage Need: Relational Database Storage. In most cases, relational databases require storage that is mounted, accessed, and locked by a single node (such as an EC2 instance). Instead use Amazon DynamoDB or Amazon RDS.

Storage Need: Temporary Storage. Consider using a local instance store for items like buffers, caches, queues, and other scratch data.

Amazon EBS Anti-Patterns:

Amazon EBS doesn’t suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

Storage Need: Temporary Storage. Consider using a local instance store for items like buffers, caches, queues, and other scratch data.

Storage Need: Multi-Instance Storage. EBS volumes can only be attached to one EC2 instance at a time. If you need multiple instances attached to a single data store, consider using Amazon EFS.

Storage Need: Highly Durable Storage. Instead use Amazon S3 or Amazon EFS. Amazon S3 Standard storage is designed for 99.999999999 percent (11 nines) annual durability per object. You can take a snapshot of an EBS volume, and that snapshot gets saved to S3, thereby providing the durability of S3. Alternatively, Amazon EFS is designed for high durability and high availability, with data stored in multiple Availability Zones within an AWS Region.
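
As a quick illustration of that snapshot path, a minimal boto3 sketch (the volume ID is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot an EBS volume; the snapshot is stored in S3 behind the scenes,
# which is what gives it S3-class durability.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume
    Description="Point-in-time snapshot for durability",
)
print(snap["SnapshotId"])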

Storage Need:  Static Data or Web Content: If data is more static, Amazon S3 might represent a more cost-effective and scalable solution for storing  fixed information. Web content served out of Amazon EBS requires a web server running on Amazon EC2; in contrast, you can deliver web content directly out of Amazon S3 or from multiple EC2 instances using Amazon EFS.

Amazon EC2 Instance Store Anti-Patterns:

Amazon EC2 instance store doesn’t suit all storage situations. The following list presents some storage needs for which you should consider other AWS storage options.

Storage Need: Persistent Storage. If you need storage similar to a disk drive that must persist beyond the life of the instance, EBS volumes, EFS file systems, or S3 are more appropriate.

Storage Need: Relational Database Storage. In most cases, relational databases require storage that is mounted, accessed, and locked by a single node (such as an EC2 instance). Instead use Amazon DynamoDB or Amazon RDS.

Storage Need: Shared Storage. Instance store can only be attached to one EC2 instance at a time. If you need multiple instances attached to a single data store, storage that can be detached from one instance and attached to a different instance, or the ability to share data easily, Amazon EFS, Amazon S3, or Amazon EBS are better choices.

Storage Need: Snapshots. If you need long-term durability, availability, and the ability to share point-in-time disk snapshots, EBS volumes with snapshots stored in S3 are a better choice.

Posted in AWS, AWS Certified Solutions Architect, Uncategorized

Path to AWS Architect Professional – Which DB to use? re:Invent Notes

AWS re:Invent videos on YouTube are a goldmine for knowledge seekers. What follows are my notes from the AWS re:Invent 2017 ‘Which Database to Use When’ presentation. I am using these videos to study for my AWS Architect Professional exam coming up at the end of May. I hope my notes help you, too.

Amazon database philosophy: purpose-build databases to satisfy particular workloads for the best price, programmability, and performance for customers.

Self-managed database: You have full responsibility for upgrades, backup, and security. Full control over the parameters of the server and DB. Replication is expensive and requires significant engineering.

          VS

AWS-managed database: AWS provides upgrades, backup, and fail-over as a packaged service. AWS provides infrastructure security, certifications, and tools for security. The DB is a managed appliance, so you can easily automate. You can leverage API calls to the DB / S3. “Everything is at the end of an API call.”

Generalities – what are you doing with your DB?

Operational [transactional, system of record, content management ]

Usually a good fit for caching. Small compute sizes; few rows, items, or documents per request. High throughput, high concurrency. Mission critical: HA, DR, data protection.

Things to consider: Size at limit – bounded or unbounded? Rows, key-values, or documents? Need relational capabilities? Push down compute to the DB? Change velocity (insert-only workload vs. update workload). Ingestion requirements.

Relational stores are really good if you need referential integrity, strong consistency, transactions, and hardened scale. Complex query support with SQL.

Key-value stores: low-latency GET and PUT. High throughput, partitionable, fast ingestion of data. Simple query methods with filters.

Document stores: indexing and storing any document with support for querying any property. Simple queries with filters, projections, and aggregates.

Graph stores: creating and navigating relations between data easily and quickly. Easily express queries in terms of relations.

RDS

RDS is a great general-purpose DB: start very small and grow with the business. When you are building an application with common frameworks like Ruby on Rails, Django, etc., you can choose a particular engine based on the skills on your team (Python skills and PostgreSQL) and your app requirements. Bringing apps into the cloud, people start with RDS – SQL Server for bringing in IIS or .NET – to get out of the business of DB management and focus on the application. Aurora vs. EBS.

Aurora is a fantastic storage environment because it’s always Multi-AZ. For massive-scale relational workloads, choose Aurora, then choose features based on the application. RDS is bounded at a limit: you provision storage and grow with it, but there are ultimately limits. Aurora offers encryption at rest and in transit, up to 5x better performance than standard MySQL, and is cost efficient.

DynamoDB

Break out of the limitations of relational DBs. Gives the ability to operate efficiently at much greater scale. You can mix and match DynamoDB with RDS. A shopping cart that needs high availability and high throughput – put that on DynamoDB. If you have data with huge amounts of “push down” computation, you may put that in RDS. Data Migration Service moves data between sources. DynamoDB Streams listens for changes and updates data. Partitioned.

DAX for caching gives unbounded low latency – an application implemented on DynamoDB with DAX in front, where you don’t manage the cache yourself. DAX is a “write-through cache” giving massive acceleration in performance.

ElastiCache: you add a cache on top of an operational store. The application takes responsibility for keeping the cache and data consistent. Memcached and Redis interfaces, so you have an open approach.

If your data is bigger than the cache size and you are missing hits on the cache, then the cache does no good. Not everything is cacheable.

Neptune – graph DB. Store billions of relations and query with millisecond latency. Six replicas of data across three AZs. Build queries with Gremlin or SPARQL. Relationships are persisted in the graph store. Push down compute.

Analytics- [retrospective, streaming, predictive ] 

Analytic workloads – columnar format. Data in columns tends to repeat and is very compressible. Analytic workloads are large and usually partitioned. Large compute size. Heavy compute push-down. Little to no updates; need lots of memory and in-memory compute.

Analytic workloads – primary decisions to consider: streaming or not, latency requirements, ETL or no ETL, serverless or dedicated compute, always active or occasionally active. What is your data format?

Amazon Athena – interactive analysis product. Treat data at rest (structured or unstructured) like a DB and query it. Zero setup cost: point it at S3 and start querying. Pay per query. ANSI SQL interface. Zero administration. Serverless. Good for doing retrospective analysis and developing trends.
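
To make the “point it at S3 and start querying” idea concrete, here is a minimal boto3 sketch; the database, table, and results bucket are made-up names:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},  # hypothetical database over data in S3
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
# Poll get_query_execution() with this ID to see when the query finishes.
print(resp["QueryExecutionId"])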

Redshift – fast, powerful data warehouse. The schema and access patterns are understood; extremely fast queries at scale. Resize the cluster up and down. Data encrypted at rest and in transit. Manage your own keys with KMS. Inexpensive. Good for doing retrospective analysis and developing trends.

Kinesis – for real-time analytics: process real-time data with SQL. An example would be a customer support case where you want to react in real time, or brake sensors on a train. You don’t have time to index that into a data warehouse; you need it now.

Elasticsearch – good for log analysis, full-text search, and application monitoring, in more flexible and natural ways. It has Kibana bundled in.

 

Posted in AWS, AWS Certified Solutions Architect, Uncategorized

Lambda Access Key Age Checker using Python

How old are those Keys, anyway? 

The story goes like this: I needed an automated way of knowing how old every API key is, and which user it belongs to, in all the AWS accounts I work with.

I went looking for a Lambda script on the interwebs that would do some checks on access key age for me. Seems like a basic thing you’d think would be out there… but no, no one had done it in this way – at least nothing I could find. I did find some code pieces that others had written for pulling out just the key age, but I ended up putting most of this together myself and learning a lot!

So what does this script do? Leveraging the boto3 library, it calls out to IAM and returns all usernames into a list; the next loop then iterates through that list of users, doing the following for each: get the user’s keys and compare each key’s creation date against the current date, and if a key is older than 90 days, append that user and key age to a new list. It then converts that new list into a string (so it complies with the SNS message type) and passes the string to SNS, which emails everyone subscribed to the topic.


Here is the code in my GitHub. The Python 2.7 code does exactly what is stated above. Also, here is the JSON policy that gives Lambda access to IAM, SNS, and CloudWatch Logs; you will need to attach it to the Lambda execution role created for this function to run.
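
For readers who just want the shape of it, here is a minimal Python 3 sketch of that flow (the repository code is Python 2.7; the SNS topic ARN and the 90-day threshold below are placeholders):

import boto3
from datetime import datetime, timezone

MAX_AGE_DAYS = 90
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111111111111:key-age-alerts"  # placeholder

iam = boto3.client("iam")
sns = boto3.client("sns")

def lambda_handler(event, context):
    old_keys = []
    # Walk every IAM user in the account.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            # Check each access key the user owns against the age threshold.
            for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                if age > MAX_AGE_DAYS:
                    old_keys.append(f"{user['UserName']}: {key['AccessKeyId']} is {age} days old")
    if old_keys:
        # SNS wants a plain string, so the list is joined before publishing.
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="Access keys older than 90 days",
            Message="\n".join(old_keys),
        )
    return old_keys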

Lambda has a default function timeout of 3 seconds. In one of the regions in which I implemented this, the code took 5 seconds to run, so I had to increase the timeout to 5 seconds, FYI.

I set up a CloudWatch Events rule to trigger this Lambda using an AWS cron expression.
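
If you would rather script that rule than click it together in the console, here is a hedged boto3 sketch; the rule name, schedule, and function name/ARN are assumptions:

import boto3

events = boto3.client("events", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111111111111:function:key-age-checker"  # placeholder

# Run the checker every day at 12:00 UTC (AWS cron expressions use six fields).
rule = events.put_rule(
    Name="key-age-checker-daily",
    ScheduleExpression="cron(0 12 * * ? *)",
    State="ENABLED",
)
# Allow CloudWatch Events to invoke the function, then wire it up as the rule target.
lambda_client.add_permission(
    FunctionName="key-age-checker",
    StatementId="AllowCloudWatchEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(
    Rule="key-age-checker-daily",
    Targets=[{"Id": "key-age-checker", "Arn": FUNCTION_ARN}],
)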

I don’t claim to be a developer, but I do love automating things with code, and this script works, and works well, repeatedly. I hope it works well for you, too!

Oh, one last thing! This script could also easily be modified to disable keys that are older than 90 days, which is the next logical step after creating user awareness of key age and implementing and communicating a policy on key age.
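
That change would be small; here is a hedged, standalone sketch of the extra call (deactivating rather than deleting keeps the key recoverable if something breaks):

import boto3

iam = boto3.client("iam")

def deactivate_key(user_name, access_key_id):
    # Flip the key to Inactive; it can be re-enabled if the owner complains.
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )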

Posted in AWS, Lambda

AWS S3 Bucket Policy Batman Style

 

As easy as bucket policies are supposed to be, I fought for some hard-won knowledge on how to bolt together what I thought was going to be a simple policy. After banging my head, I called AWS support for help – so I wanted to share the win.

My goal: apply an S3 bucket policy that limits access to only the IAM users and an IAM role in the same account as the bucket.

AWS advocates that bucket policies are designed for cross-account access, and to do what I wanted they recommended IAM policies. That is all well and good, but I believe in tightening security closest to the source when I can, and bucket policies are great for that. I am using condition statements to deny access except for the named entities. So here is what I thought would work:

{
    "Version": "2012-10-17",
    "Id": "BatCavePolicy",
    "Statement": [
        {
            "Sid": "Deny access except NotPrincipal list",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::111111111111:user/batman",
                    "arn:aws:iam::111111111111:user/robin",
                    "arn:aws:iam::111111111111:user/catwoman",
                    "arn:aws:iam::111111111111:role/batcomputercontrollerrole"
                ]
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::batcomputerbucket"
        }
    ]
}

Nope. I locked myself out of the bucket. Here’s why:

Instead of the complete principal ARNs, AWS needs the RoleId and UserId of the IAM resources for a bucket policy of this type. We can get these IDs by running the following AWS CLI commands:

$ aws iam get-role --role-name batcomputercontrollerrole

"RoleId": AROZZZZZZZZZZ1

Since the EC2 instance (when assuming an IAM role) is using temporary credentials, we have to use the RoleId with a wildcard, “AROZZZZZZZZZZ1:*”, in the bucket policy; the ‘*’ matches the session-name portion of those temporary credentials.

$ aws iam get-user --user-name batman

"UserId": AIDYYYYYYYYYYY1

$ aws iam get-user --user-name robin

"UserId": AIDWWWWWWWWWWW1

$ aws iam get-user --user-name catwoman

"UserId": AIDXXXXXXXXXXX1

And then our correct Bucket Policy looks like this:

{
    "Id": "BatCavePolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BatComputerBucket",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:s3:::batcomputerbucket",
                "arn:aws:s3:::batcomputerbucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userid": [
                        "AROZZZZZZZZZZ1:*",
                        "AIDYYYYYYYYYYY1",
                        "AIDWWWWWWWWWWW1",
                        "AIDXXXXXXXXXXX1"
                    ]
                }
            },
            "Principal": "*"
        }
    ]
}

AWS does have this documented, but that link only covers doing it this way for the role; having to get the UserId element for the users threw me off. I hope this helps.

Posted in AWS, AWS Certified Solutions Architect, Bucket Policy

2018 AWS Security Specialty BETA Exam

Finally, it’s here!

AWS Certified Security – Specialty Beta Exam

SCS-C01. I registered last night; it is only $150, which is much better than the usual $300 for each of the other two Specialty exams. This beta exam will only be available from January 15th to March 2nd, so I scheduled mine for Feb 28th.

UPDATE 2/21/2018 – It appears acloud.guru has released new content for this exam! You need your own acloud.guru account to get it, and the course is worth the $99. The course is still mixed with some older lectures, so I don’t think Ryan is totally done – but there is definitely new content up there!

UPDATE 3/3/2018 – I took the Specialty BETA exam on Feb 28th. The questions were tough but fair, with very minimal, if any, “word trickery” at all. It was the most straightforward certification exam I have ever taken: you are presented with facts, and the choices are well worded. Good job, AWS team!

I can’t really say too much about content, because of the NDA, but I can tell you some general things. Though the BETA is over now . . .

  • IAM Policies are a huge part of the exam, so please understand how all policies work and what happens when multiple policies overlap one another. [ IAM Ninja video links below ].
  • KMS was also a large part of the exam, so no surprises there; know your KMS inside and out.
  • CloudWatch Agent. Know all the capabilities and what this agent does.
  • IAM Federation.

Also, on acloud.guru, here is their discussion page where other people discuss their exam experience.

Now comes the 90-day wait to see if I passed… I’d like to see a PASS, but if I don’t, I get a voucher for the general release!

Q: What happens if I do not pass the beta exam?
Candidates who do not pass the beta exam will receive a voucher to re-attempt the AWS Certified Security Specialty exam once it is released.

Ok, now the nitty gritty, what resources were needed for the BETA?

Official Exam Guide

First, here is the pdf of the  AWS Exam Guide for the BETA SCS-C01

Now, here is my resource collection:

I can start by telling you I’ve already purchased the

AWS Certified Security – Specialty Course from acloud.guru

It’s the course from the original BETA exam that came out (early 2017?), but it covers all the fundamentals, and the guys at acloud.guru update their content regularly when it comes to exam courses. I believe the cost of this is $60. Outstanding value!

acloud.guru founder Ryan Kroonenburg sat this exam on Jan 15th in London. He made this video giving general exam-experience feedback, and he also said that he will be updating the above-mentioned acloud.guru AWS Security course based on his experience. UPDATE 2/5/2018: a rep from acloud.guru told me that the course would be updated at the end of February 2018.

WhitePapers

Next, I think this Exam will hit every corner of the AWS Universe, which means diving deep into the AWS Security and Compliance Whitepapers

Out of those, The Well Architected Framework – Security Pillar would be the one to know like the back of your hand.

Re:Invent 2017 Security Vids

After that, the AWS RE:Invent 2017  IAM Policy Ninja Video is an incredible resource and to be sure, I will watch (and practice) this multiple times over the next several weeks. And other RE:Invent 2017 Security Vids:

AWS Philosophy of Security
Architecting Security and Governance Across Multiple-Accounts
Security Anti-Patterns: Mistakes to Avoid
Best Practices for Managing Security Operations on AWS
AWS Security State of the Union
Compliance and Top Security Threats in the Cloud
Incident Response in the Cloud
Five New Security Automation Improvements You Can Make by Using CloudWatch Events and AWS Config Rules
Using AWS Lambda as a Security Team
 CloudTrail to Enhance Governance and Compliance of Ama

Now the AWS recommended training for the SCS-C01 BETA exam:

AWS Security Fundamentals e-course
Online Resources for AWS Security

Exam Topic Specific Resources SCS-C01

Domain 1: Incident Response

RE:Invent Video: Incident Response in the Cloud

1.1 Given an AWS abuse notice, evaluate the suspected compromised instance or exposed access keys.

I received a notification that my AWS resources or account may be compromised. What should I do?

1.2 Verify that the Incident Response plan includes relevant AWS services

Building a Cloud-Specific Incident Response Plan

1.3 Evaluate configuration of automated alerting and execute possible remediation of security-related incidents and emerging issues

How to Remediate Amazon Inspector Security Findings Automatically
How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events

Domain 2: Logging and Monitoring

2.1 Design and implement security monitoring and alerting.

Designing Centralized Logging
CloudWatch Logging Agent
How to Monitor Host-Based Intrusion Detection System Alerts on Amazon EC2 Instances
How to Receive Alerts When Your IAM Configuration Changes
SID341 – Using AWS CloudTrail Logs for Scalable, Automated Anomaly Detection

2.2 Troubleshoot security monitoring and alerting.

Troubleshoot SNS Deliveries
Troubleshoot SES Notifications

2.3 Design and implement a logging solution.

Logging Whitepaper
How to Monitor and Visualize Failed SSH Access Attempts to Amazon EC2 Linux Instances

2.4 Troubleshoot logging solutions

Troubleshooting CloudWatch Events

Domain 3: Infrastructure Security

3.1 Design edge security on AWS.

AWS WAF
AWS Shield
Protect Dynamic Content using Shield and Route53
Serving Private Content Through CloudFront
SID342 – Protect Your Web Applications from Common Attack Vectors Using AWS WAF
SID401 – Let’s Dive Deep Together: Advancing Web Application Security

3.2 Design and implement a secure network infrastructure.

Setting Up an AWS VPN Connection – Amazon Virtual Private Cloud
VPN Connections – Amazon Virtual Private Cloud – AWS Documentation
Well Architected Framework – Security Pillar
EC2 Systems Manager

3.3 Troubleshoot a secure network infrastructure.

Troubleshooting – Amazon Virtual Private Cloud – AWS Documentation
Troubleshoot Connecting to an Instance in a VPC – AWS – Amazon.com
Troubleshooting AWS Direct Connect – AWS Documentation
VPN Tunnel Troubleshooting – AWS – Amazon.com

3.4 Design and implement host-based security

IDS and IPS for EC2 Instances
How to Monitor Host-Based Intrusion Detection System Alerts on Amazon EC2 Instances
Amazon Inspector – Security Assessment Service

Domain 4: Identity and Access Management

4.1 Design and implement a scalable authorization and authentication system to access AWS resources.

 

LIST OF IAM PERMISSIONS

IAM JSON POLICY ELEMENTS

IAM POLICY EVALUATION

AWS Identity and Access Management (IAM) Documentation
IAM Best Practices – AWS Identity and Access Management
Enabling SAML 2.0 Federated Users to Access the AWS Management …
SID337 – Best Practices for Managing Access to AWS Resources Using IAM Roles
AWS Cognito
SID344 – Soup to Nuts: Identity Federation for AWS
S3 Bucket Policy Examples

4.2 Troubleshoot an authorization and authentication system to access AWS resources.

Troubleshooting IAM – AWS Identity and Access Management
Troubleshooting IAM Roles – AWS Identity and Access Management
Troubleshoot IAM Policies – AWS Identity and Access Management
Troubleshooting Amazon EC2 and IAM – AWS Identity and Access …
Troubleshooting Amazon S3 and IAM – AWS Identity and Access …

Domain 5: Data Protection

5.1 Design and implement key management and use.

AWS Encryption SDK
AWS Key Management Service Concepts – AWS Documentation
RE:Invent Video – Best Practices for Implementing KMS
Whitepaper – Best Practices for KMS
SID345 – AWS Encryption SDK: The Busy Engineer’s Guide to Client-Side Encryption
Amazon Macie

5.2 Troubleshoot key management.

Verifying and Troubleshooting KMS Key Permissions – AWS .
Determining Access to an AWS KMS Customer Master Key – AWS Key …
Limits – AWS Key Management Service – AWS Documentation
Troubleshooting Key Signing Errors

5.3 Design and implement a data encryption solution for data at rest and data in transit.

How to Protect Data at Rest with Amazon EC2 … – AWS – Amazon.com
Encrypting Amazon RDS Resources – AWS Documentation
Encrypting Data at Rest ( non AWS BLOG )
Amazon Certificate Manager 
How to Encrypt and Decrypt Your Data with the AWS Encryption CLI
How to Address the PCI DSS Requirements for Data Encryption in Transit Using Amazon VPC
Architecture for HIPAA Compliance on AWS

The Full List of the Security, Compliance, and Identity Sessions, Workshops, and Chalk Talks at AWS re:Invent 2017

Based on acloud.guru founder Ryan Kroonenburg’s feedback on the exam, I’ve added some more study links:

Cloud HSM FAQs
Cloud HSM AWS Documentation
Protecting Data Using Server-Side Encryption with Amazon S3-Managed Encryption Keys (SSE-S3)
Protecting Data Using Client-Side Encryption in S3
IAM Policies and Bucket Policies and ACLs! Oh, My!
Posted in AWS, AWS Certified Solutions Architect, Cloud Security, Cyber Security

SamSam Malware Hit Close to Home

 

This morning, the Denver Post reported that a variant of the SamSam malware struck the Colorado Dept. of Transportation (CDOT), affecting 2,000 computers.

“TrendMicro said the attack wasn’t due to an employee opening an infected email, but hackers gained access remotely using a vendor’s user name and password”

The computers that were compromised were running McAfee Anti-Virus.

My own take on this:

First, props to the team at CDOT for having all of their data backed up, so they did not need to pay the ransom, GREAT JOB gals and guys!

Second, this story backs up my own experience that legacy anti-virus products (like the one mentioned above) are not designed to detect and stop today’s advanced malware. Legacy anti-virus products are good for checking audit boxes, but will fail you in the trenches, as seen here.

Instead, look to products like FireEye, Palo Alto Networks, and MalwareBytes. The first two in that list use a combination of a software client on the host machine, upstream sandbox hardware, and real-time cloud intel, which act as a unified solution to detect and prevent advanced malware. For a stand-alone client, I’ve seen the MalwareBytes product find and remove malware artifacts that other solutions did not see. CarbonBlack also has a solid endpoint offering.

Third, Identity and Access Management (IAM) is key! Lost or stolen credentials appear to be at the heart of many high-profile compromises (think Target). A solid IAM system integrates with all authorization systems and ties credentials to resources and roles; combine this with logging and you can go a long way. Every credential should be tied to a role, and role access should be locked down to strict job requirements based on the principle of least privilege. All log-ins should be monitored and base-lined so any deviation from the norm generates an alert. I have used the SailPoint product for centralized IAM in the past, and it performs well.

My thoughts on this blog are meant only to educate the security professionals who protect us from the bad guys, and not as criticism. Again, I have to tip my hat to the hard-working team at CDOT for their back-up and recovery of their systems. They are demonstrating true resiliency.

Disclaimer: I do not work for; nor am I paid by any vendor listed in this post.

Posted in Uncategorized