Thinking about Information Security in 2018 and beyond

I have been thinking about the direction Information Security will take over the next five years. The pace of change in the daily lives of human beings is simply astounding, with new technology making its way into our homes and garages with unprecedented speed and ease. So, here are some things to think about when it comes to the next generation of Information Security.

Security of Electronic Personal Assistants (think Alexa, Siri, OK Google, and whatever Samsung is building). As new “skills”/features/capabilities get added to Electronic Assistants (presently on Amazon there are 12,000-plus and growing for Alexa), the more they will be able to do, and thus the potential for an effective hack goes up as well. Another way of putting it: the more aspects of our lives a Digital Assistant can affect, the deeper or more intrusive a hack can be.

Much of the focus around security for Digital Assistants has been in development (secure APIs, signed requests from API to cloud service, and again, in Amazon, security using IAM Roles to determine what resources an API call can access). Although this is good, it leaves out a large piece of the pie, and that is authentication of the speaker: ANYONE can talk to my Alexa, Google Assistant, or Siri and get them to perform tasks. Burger King already did this by hijacking Google Assistant.

I am aware that some major banks have been working to gain an edge and build Alexa skills that help sell their services – things like signing up for new credit card offers or booking a trip on your new bankcard. Electronic Assistants also have access to our calendars; they can call for emergency services, order a Lyft, post to Slack, check your bank balance, turn your lights on and off, activate your Roomba – you get the idea – Electronic Assistants can do things that create a physical action in the real world.

So, really then, all someone would have to do to gain access to all of the contexts of your life which your Electronic Assistant controls is: one, break into your house, or two, set up their own Electronic Assistant on your account from wherever they are. One method requires bypassing a $40 Kwikset lock; the other requires knowing the username and password of the service hosting your Electronic Assistant.

To solve this, our Electronic Assistants will need to know our voices and only take commands from their authorized owners. Combine this with a second factor – maybe an RFID ring you could wear, or a verbal passphrase that would “open” your Electronic Assistant for 5 minutes so you can command it. Vendors could also program location into them and allow them to work only from inside your home – cutting off the chance of someone setting one up in another part of the world.
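As a purely hypothetical sketch (none of these function names are a real vendor API), here is how an assistant could gate commands behind speaker verification plus a time-limited passphrase “unlock” window:

import time

UNLOCK_WINDOW = 5 * 60           # a spoken passphrase opens the assistant for 5 minutes
PASSPHRASE = "open sesame"       # placeholder second factor
_unlocked_until = 0.0

def voice_match_score(voice_sample, enrolled_profile):
    # Stand-in for a real speaker-verification model; always matches here.
    return 1.0

def execute(transcript):
    return "Executing: " + transcript

def handle_command(voice_sample, transcript, enrolled_profile):
    global _unlocked_until
    # Factor 1: the voice must match the enrolled owner.
    if voice_match_score(voice_sample, enrolled_profile) < 0.9:
        return "Sorry, I only take commands from my owner."
    # Factor 2: a passphrase opens a short command window.
    if transcript.strip().lower() == PASSPHRASE:
        _unlocked_until = time.time() + UNLOCK_WINDOW
        return "Assistant unlocked for 5 minutes."
    if time.time() > _unlocked_until:
        return "Please say the passphrase first."
    return execute(transcript)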

The second area where I see Information Security being needed is:

Security of Autonomous Vehicles

There was a revelation recently that you can confuse a self-driving car by placing stickers on a stop sign.

I am certain that example is just one of thousands of methods which could be used in the real world to disrupt the sensors and programming of autonomous cars. My mind can come up with all kinds of other ways: additional yellow stripe paint on the road, the use of high-intensity light (laser pointers) to interfere with cameras on cars, placing a mannequin in the road to stop an autonomous car indefinitely, creating a condition of conflict the programmers did not anticipate (red and green lights at the same time), driving with a cardboard silhouette of a person attached to the front of the car. You get the idea. Autonomous vehicles will be easy to disrupt (at least the first few generations, anyway).

Again, the first thing that comes to mind when thinking about solutions is that some kind of two-factor method may prove helpful – provide a second method of verifying whether a condition is true or false. For the stop sign, in combination with recognition of an octagonal shape with letters, perhaps the car could pre-verify the existence of the sign by querying an enhanced map database that has citywide sign locations and meanings (yes, I know that can be hacked too, but the difficulty goes up from slapping a sticker on a sign to a full-fledged database hack); see the sketch below. Add another factor: make the signs, stoplights, and guardrails “intelligent,” and they could have their own unique electronic signatures and communicate with the car, with failsafes a given when conditions are not met. Humans could also be a second factor – when the machine cannot verify whether a condition is true, the car could query you (although that does not instill the kind of faith car companies want people to have in autonomous cars).
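To make the “second factor” idea concrete, here is a small illustrative sketch (the map data and threshold are invented for the example, not any manufacturer’s system) in which a camera-detected stop sign is only trusted if an independent map database also places a stop sign near the car’s position; otherwise the car fails safe:

from math import hypot

# Hypothetical map database of known sign locations (x, y in meters) -> sign type.
KNOWN_SIGNS = {(120.0, 45.0): "STOP", (310.5, 78.2): "YIELD"}

def map_confirms(position, sign_type, radius=15.0):
    # Second factor: does the map database agree a sign of this type is nearby?
    return any(kind == sign_type and hypot(x - position[0], y - position[1]) <= radius
               for (x, y), kind in KNOWN_SIGNS.items())

def decide(camera_sign, position):
    if camera_sign is None:
        return "proceed"
    if map_confirms(position, camera_sign):
        return "obey sign"                                    # both factors agree
    return "fail safe: slow down and query the human driver"  # factors conflict

print(decide("STOP", (118.0, 44.0)))    # camera and map agree -> obey sign
print(decide("STOP", (900.0, 900.0)))   # camera-only detection -> fail safe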

Last, use machine learning as a solution. Most of us drive to the same places every day, and the car could be taught all it needs to know about the route the first few times through. This method, in combination with the others, would make autonomous cars safer for human beings. So… for all you security die-hards out there: YES, all of these solutions can be hacked. The level of complexity rises, however. I am only suggesting methods to make autonomous cars MORE secure and safe, fully recognizing that the only sure way to not have your car hacked is to buy a car from 1983 or before; or, if you are really paranoid, roll 19th-century style with horse and buggy, top hat, and monocle.

Stay Safe! Stay Secure!


Passed AWS SysOps Associate Exam!

Hi! I passed the AWS SysOps Associate exam, so I wanted to spend a few minutes and give my thoughts on it. I can’t give any details on actual questions, because that would break the AWS NDA, but I do have some personal insight to share.

As a multiple-choice-question exam with no exhibits, screenshots, or AWS CLI emulators, I found the AWS SysOps exam experience to diverge somewhat from the actual experience of using the AWS console and CLI in real life. That’s not saying you don’t have to know your stuff, because you do – I am saying that if the test authors spent the same amount of time (or more) incorporating console screen grabs, or using an emulator to have the test taker type in AWS CLI commands, as they did on pure word trickery, this exam would truly be great. As the exam stands, you have to be good at using the AWS console and CLI as well as at abstracting the AWS GUI experience into the written-word MCQ format.

The exam was heavy on Auto Scaling. In fact, it could have been called the AWS Auto Scaling exam. Many different scenarios were presented and the best scaling solution had to be selected. Know Connection Draining and Load Balancing inside and out – I was hit hard on those!

Second, the exam was heavy on CloudWatch (as it should be). Know all your CW metrics, which services have 1-minute metrics by default, CW namespaces, etc. This makes sense, as a good SysOps person knows where to get logs and how to read them. Again, I can’t overstate the importance of knowing CloudWatch inside and out. Know the CloudWatch API calls. Read the CloudWatch FAQs.

Third, know your VPCs, routing, and security tools. Know which subnet is the “main” one when you use the VPC wizard to make a VPC with public and private subnets. Know which resources the VPC wizard spins up for each of the four wizard types, and know whether you can delete those resources in each case. (To study for this I did labs of each a few different times, tried deleting things, and noted what was left.) Know when you need to use the route table, what it does, and where you need it – I had a few scenarios where I was asked about which routes go where; so yeah… you need to know routing.

Ahh yes, the security tools. Know your SSE-C for S3, how it works with the API, what the SSE-C API sends in each call, etc. Know bucket policies inside and out, know how to READ JSON bucket policies and what they do, and know that an explicit Deny “trumps” an Allow. Know the recommended security settings AWS has for console users, best practices, and which services Amazon is responsible for vs. which ones the customer is responsible for. Know NACLs vs. Security Groups, and when and where you use each one. IAM fundamentals and basic policies are a must.
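To illustrate reading a JSON bucket policy where an explicit Deny “trumps” an Allow, here is a minimal boto3 sketch; the bucket name and IP range are made up for the example:

import json
import boto3

# Allow public reads of the bucket, but explicitly Deny any request that does
# not come from 203.0.113.0/24. The explicit Deny wins for everyone else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-study-bucket/*"},
        {"Effect": "Deny", "Principal": "*", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-study-bucket/*",
         "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}}},
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-study-bucket", Policy=json.dumps(policy))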

Last, know all your S3, EC2, and EBS basics – I actually went over all my notes (and class material) from the Solutions Architect Associate exam, because, yes, there is some overlap.

Sources: I used the ACloudGuru online SysOps class to train – Ryan Kroonenberg is a GREAT instructor. The class is 16 hours; I went through it a couple of times, but it is enough for the foundations only, not enough to pass the exam. I read the FAQs for all services (again and again), did any practice questions I could get my hands on, and labbed things up again and again. Lots of reading the FAQs on all services.

As a Security Professional, I feel that understanding the inner workings of AWS SysOps will aid in securing applications in AWS, protecting the services that run, and understanding where the built-in AWS tools are not enough and where I might need other vendors to fill the gaps. I was glad the exam hit on some of the security aspects of AWS – can’t do that enough. 🙂 After both exams, I still feel like the AWS learning is really just beginning.

I hope AWS (and other vendors) continue to move away from the MCQ format for certification exams and move toward what Cisco and Red Hat do: the use of emulators for hands-on testing of student knowledge.

Thanks for hanging out with me! I hope this helps!


SOPHOS – Security SOS Botnet Webinar Write-up

“SOPHOS – Security SOS Botnet Webinar” Write-up by Chris Henson

VERY early last Thursday, I attended the Sophos Security SOS ‘Botnets – Malware that makes you part of the problem’ webinar. The webinar was early because it was hosted late in the day in the UK. The main speaker was Paul Ducklin. Paul knows his stuff when it comes to malware, as do many engineers at Sophos; that team has some of the most extensive technical write-ups on malware behavior out there.

As usual, I took notes, so I wanted to share them here:

-BEGIN WEBINAR NOTES –

Info about Botnets:

There is a rise in bot-builder tools – semi-custom software packs where the operator can customize phishing [dropper] campaigns and can utilize the bots in a variety of ways. Bots can be customized to report back / call home on specific attributes of the computer they take over: current patch level, disk space, GPU, memory, enumerated running processes, enumerated security products installed, etc.

Web-based botnet consoles have knobs / dials / tools and give out various types of information about the botnet they control in a dashboard layout: geo-location, OS type, target, who reported in, how long ago, etc.

This data can then be used to conscript the bot into a specific type of botnet:

  • If you have infected many machines with high GPU capabilities, those machines could go to a Bitcoin-mining botnet.
  • If the initial infection is on a corporate machine, the data about the security tool sets installed may be valuable to other bad guys.
  • If machines are found that have HUGE disk space, those machines become part of a storage botnet.
  • If you are an average machine, or an IoT device, you get conscripted into a DDoS botnet that can be rented out.

Bots – smaller, more basic kits – simply act as downloaders:

  • for other kinds of software, sometimes even “legitimate” ad-ware, where companies are paid each time their ad-ware is installed.
  • for more specific botnets, determined later by the attacker: SPAM, keylogging.
  • when the machine is sold to another bad guy, they decide what to download.
  • for multiple bots [ a machine can be part of more than one botnet ].

Bots and Ransomware:

After a bot has exceeded its useful life, the attacker may try to get another $200 – $600 and have the bot’s last job be to install ransomware. The reverse is also true: ransomware can have extra code that installs bots, so even after you pay, the machine is still infected.

Keeping bots off your Computer:

  • Patch, patch, and patch – reduce the risk surface.
  • Remove Flash from your machine [ Adobe Flash has been the #1 target of infections ].
  • Do not run Java in your browser.
    • Oracle recently modified the base Java install to run as an app only on the machine, and NOT as an applet in the browser.
  • For things like home routers, cameras, and IoT devices, always get the latest vendor firmware.
    • If a device is old and vulnerable, it is time to scrap it and get a new one.

Detecting bots:

  • The Microsoft Sysinternals tool set, to see processes.
  • Wireshark.
  • [ my own note ] Security Onion with Bro and ELSA installed, getting a tapped or spanned feed from the suspected machine.


AWS Certified SysOps Administrator – Associate: Study Sheet – Monitoring Section

Hi friends, I recently passed the AWS Certified Solutions Architect – Associate exam. Woo Hoo! And although this was a cool accomplishment, I feel that I barely have my toe in the water when it comes to knowledge within AWS – so I have to keep going! I am knowledge hungry! After tossing a coin to choose the next AWS exam, the Advanced Networking specialty or SysOps, the AWS SysOps exam won out. Here is the first of a series of study sheets for the SysOps.

AWS CloudWatch comes up first in the Monitoring section. As Amazon puts it: “Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.”

CloudWatch is a metrics repository; AWS services place metrics in the repository, and you, the AWS user, view statistics from those metrics. Custom metrics are supported.

A namespace is a container for CloudWatch metrics. Metrics in separate namespaces are isolated from one another.

A metric represents a time-ordered set of data points that are published to CloudWatch. Each data point is marked with a timestamp.

The timestamp can be up to two weeks in the past and up to two hours into the future. Timestamps are based on current time in UTC.

CloudWatch retains metric data:

  • Data points gathered every 60 seconds are available for 15 days.
  • Data points gathered every 300 seconds (5 min) are saved for 63 days.
  • Data points gathered every 3600 seconds (1 hr) are saved for 455 days (15 months).

CloudWatch also supports the concept of alarms, derived from the metrics in the repository. “An alarm watches a single metric over time and performs one or more actions based on the value of the metric relative to a threshold over time.” Alarms trigger actions when a service is in a specific state for a sustained period of time.
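As a minimal boto3 sketch of defining such an alarm (the instance ID and SNS topic ARN below are placeholders, not real resources):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPUUtilization stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:example-topic"],
)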

Dimensions are name/value pairs that identify a metric. AWS services that send data to CloudWatch attach dimensions to each metric. Dimensions are used to filter results; for example, you can get stats for an EC2 instance by specifying the ‘InstanceId’ dimension. You can assign up to ten dimensions to a metric.

Statistics are metric data aggregations over time. CloudWatch provides statistics based on the metric data points provided by custom data or by other AWS services. Aggregations are made using the namespace, metric name, dimensions, and the unit of measure of each data point, within the time period you specify [ Minimum, Maximum, Sum, Average, SampleCount ]. Each statistic has a unit of measure; if you do not specify one, CloudWatch uses None as the unit.

A period is the length of time associated with a specific CloudWatch statistic.
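Tying namespace, dimensions, statistics, and period together, here is a small boto3 sketch that publishes a custom metric and then reads back its statistics over 5-minute periods (the namespace and dimension values are made up for illustration):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point for a custom metric in a custom namespace.
cloudwatch.put_metric_data(
    Namespace="MyApp/Frontend",
    MetricData=[{
        "MetricName": "PageLoadTime",
        "Dimensions": [{"Name": "Page", "Value": "home"}],
        "Timestamp": datetime.utcnow(),
        "Value": 1.42,
        "Unit": "Seconds",
    }],
)

# Read the Average and Maximum statistics back, aggregated over 300-second periods.
stats = cloudwatch.get_metric_statistics(
    Namespace="MyApp/Frontend",
    MetricName="PageLoadTime",
    Dimensions=[{"Name": "Page", "Value": "home"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)
print(stats["Datapoints"])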

CloudWatch Limits:

  • Alarms: 10/month/customer free; 5,000 per region per account.
  • API requests: 1,000,000/month/customer free.
  • Dimensions: 10 per metric.
  • Metric data retention: 15 months.

The 4 standard [default] CloudWatch metrics for EC2 are:

  • CPU, Disk, Network, and Status Checks

Memory metrics are NON-standard / non-default in CloudWatch (they require publishing custom metrics, e.g., via an agent or script on the instance).

Two types of status checks:

  • System Status Checks [ for the underlying physical host ] [ stop/start the VM to resolve ]
  • Instance Status Checks [ for the actual VM ] [ reboot the instance to resolve ]
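A quick boto3 sketch for reading both kinds of status checks on an instance (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_instance_status(InstanceIds=["i-0123456789abcdef0"])
for status in response["InstanceStatuses"]:
    # SystemStatus covers the underlying host; InstanceStatus covers the VM itself.
    print(status["InstanceId"],
          "system:", status["SystemStatus"]["Status"],
          "instance:", status["InstanceStatus"]["Status"])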

EBS Monitoring on CloudWatch

Two types of Monitoring for EBS

  • Basic: data in 5-minute periods at no charge. This includes data for the root device volumes of EBS-backed instances.
  • Detailed: Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch.

EBS sends several metrics to CloudWatch for these storage types:

  • General Purpose SSD (gp2), Throughput Optimized HDD (st1), Cold HDD (sc1), and Magnetic (standard) volumes automatically send five-minute metrics to CloudWatch.  # the () denotes the API name
  • Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch. # SUPER fast, high IOPS!

Specific EBS metric names are here – with special emphasis on VolumeQueueLength: “The number of read and write operation requests waiting to be completed in a specified period of time.” If this keeps climbing, you may need to increase your volume’s IOPS.

Two volume status values to which you should pay attention:

  • warning means: “Degraded (Volume performance is below expectations), Severely Degraded (Volume performance is well below expectations).”
  • impaired means: “Stalled (Volume performance is severely impacted), Not Available (Unable to determine I/O performance because I/O is disabled).” Your volume is basically hosed!

Burst Balance

The EBS BurstBalance percent metric is described here, and here are my notes:

  • General Purpose SSD (gp2) EBS volumes have a baseline of 3 IOPS per GiB of volume size, a max volume size of 16,384 GiB, and a max burstable IOPS of 10,000 [ if you need more than this, you need to move to a Provisioned IOPS SSD (io1) volume ].

Cloud architect Dariusz Dwornikowski describes the I/O credit concept for burst balance very well in his blog: “Think of I/O credits as money a disk needs to spend to buy I/O operations (read or write). Each such operation costs 1 I/O credit. When you create a disk it is assigned an initial credit of 5.4 million I/O credits. These credits are enough to sustain a burst of highly intensive I/O operations at the maximum rate of 3000 IOPS (I/O per second) for 30 minutes. When the balance is drained, we are left with an EBS that is totally non-responsive.”
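The 30-minute figure follows directly from the numbers above; here is a quick back-of-the-envelope check, using a hypothetical 100 GiB gp2 volume for the refill part:

# gp2 burst-credit arithmetic (illustrative, using the figures quoted above)
initial_credits = 5_400_000       # I/O credits a new gp2 volume starts with
burst_iops = 3_000                # maximum burst rate in IOPS

seconds_of_burst = initial_credits / burst_iops              # 1800 seconds
print(seconds_of_burst / 60, "minutes of full-rate burst")   # -> 30.0 minutes

# Baseline performance refills the bucket while you burst; a 100 GiB volume has a
# baseline of 3 IOPS/GiB = 300 IOPS, so the net drain is slower and the real
# burst window is a bit longer than 30 minutes.
volume_gib = 100
baseline_iops = 3 * volume_gib
net_drain = burst_iops - baseline_iops                       # 2700 credits/second
print(initial_credits / net_drain / 60, "minutes with refill considered")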

Pre-warming EBS – initializing a volume restored from a snapshot by reading all of its blocks (e.g., with dd or fio) before you use it, for best performance.

RDS Monitoring 

  • Per Amazon: “Amazon Relational Database Service sends metrics to CloudWatch for each active database instance every minute. Detailed monitoring is enabled by default.”
  • In RDS itself, you monitor RDS by EVENTS
  • In CloudWatch you monitor RDS by Metrics 

Two metrics to which you should pay close attention in RDS:

  • ReplicaLag: “The amount of time a Read Replica DB instance lags behind the source DB instance” [ MySQL, MariaDB, PostgreSQL ].
  • DiskQueueDepth: “The number of outstanding IOs (read/write requests) waiting to access the disk.”

Elastic Load Balancer Metrics for CloudWatch

  • ELB reports metrics only when there is traffic. Or as Amazon puts it: “If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals. If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.”
  • HealthyHostCount metric: “The number of healthy instances registered with your load balancer.”
  • Other useful counters are statistics based on what the backend pool members return: HTTPCode_Backend_2XX, HTTPCode_Backend_3XX, HTTPCode_Backend_4XX, HTTPCode_Backend_5XX.

ElastiCache Monitoring on CloudWatch

The AWS documentation page “Which metrics should I monitor?” is the source for the information below:

  • Metrics for Memcached:
    • CPUUtilization – This is a host-level metric reported as a percent. For more information, see Host-Level Metrics. Since Memcached is multi-threaded, this metric can be as high as 90%. If you exceed this threshold, scale your cache cluster up by using a larger cache node type, or scale out by adding more cache nodes.
    • SwapUsage – This metric should not exceed 50 MB. If it does, we recommend that you increase the ConnectionOverhead parameter value.

  • Metrics for Redis:
    • CPUUtilization – Since Redis is single-threaded, the threshold is calculated as (90 / number of processor cores). For example, suppose you are using a cache.m1.xlarge node, which has four cores. In this case, the threshold for CPUUtilization would be (90 / 4), or 22.5%.
    • SwapUsage – No recommended setting with Redis; you can only scale out.
  • Evictions – This is a cache engine metric, published for both Memcached and Redis cache clusters. We recommend that you determine your own alarm threshold for this metric based on your application needs.

    • Memcached: If you exceed your chosen threshold, scale your cache cluster up by using a larger node type, or scale out by adding more nodes.
    • Redis: If you exceed your chosen threshold, scale your cluster up by using a larger node type.

One click could have protected the data of 198 Million People: Amazon AWS

A major security event involving a breach caused by user error occurred recently in AWS. Website thehill.com reports that “25 terabytes of files contained in an Amazon cloud account that could be browsed without logging in.” These files were RNC-owned and contained voter data.

Given that the article read “25 TB of files,” it’s not too much of a stretch to say these were files [objects] stored in an S3 bucket (or buckets). Here is the crazy thing from an AWS security perspective: literally one click could have protected all the files [ simply un-checking “Read” for the Everyone group ]. Take a look at the screenshot down the page a bit to see exactly what I mean.

In this instance, the contractor managing the RNC voter data files strayed from the default S3 bucket configuration, which is:

” By default, all Amazon S3 resources—buckets, objects, and related subresources (for example, lifecycle configuration and website configuration)—are private: only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.”

One can only guess that the contractor in charge of these files was trying to give access to a small group of people who did not have AWS accounts, and simply checked “Read” for Everyone on the entire bucket.

Even if this was the practice, the fact that Read for Everyone was left checked over time is simply… mind-boggling.

There are so many ways this could have been prevented. Bucket policies come to mind as well: among the many custom security access policies you can create, access controls can be applied to lock down a bucket for anonymous users (users without AWS accounts) by specifying a referring website or their source IP if extended read access is needed (see the sketch below).
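For illustration only – the bucket name and referring site below are made up – here is a boto3 sketch of a bucket policy that allows anonymous reads only when the request carries a specific referring website:

import json
import boto3

# Anonymous GetObject is allowed only for requests with the expected aws:Referer
# header; everything else falls back to the bucket's default-deny behavior.
# (Note: the Referer header is easily spoofed, so this only limits casual browsing.)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-voter-data-bucket/*",
        "Condition": {"StringLike": {"aws:Referer": "https://example-campaign-site.org/*"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-voter-data-bucket", Policy=json.dumps(policy))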

Amazon makes it easy to secure S3 buckets: first, by the default policy, and second, by literally having a place where you can click one box. There is simply no excuse for this breach!

[update: The original find is here on upguard.com and confirms my suspicion above that the files were indeed stored in an S3 bucket! ]


Sony PlayStation 2017 E3 Ticket Site [ www.gofobo.com ] Down ALL DAY

Like many other enthusiasts, I was excited to get the opportunity to purchase tickets at my local theatre to experience Sony PlayStation’s E3 LIVE simulcast on the big screen!

The link to get tickets is at this site:

https://www.playstation.com/en-us/campaigns/2017/e3experience/

which points to a 3rd party ticket provider, gofobo.com  with this URL:

http://www.gofobo.com/PlaystationE32017

At approximately 10 AM PT, http://www.gofobo.com/ CRASHED HARD and has been down ever since.

The main response code all day has been HTTP 503 – Service Unavailable. Now it is showing a 404 Not Found (screenshot above). One attempt earlier in the afternoon brought up the main gofobo.com page, but it then said that the “PlaystationE32017” code was invalid.

Earlier today, gofobo.com had two public IPs registered; I tried them both. No go.

All other requests have hung or been met with a 503 (which has now turned into a 404). I think this is really gofobo.com simply being overwhelmed by Sony PlayStation fans – FAILURE TO SCALE. It could have been another intentional, malicious DDoS against Sony, or perhaps human error killed it. I was able to get tickets within 5 minutes last year, and I don’t remember gofobo.com being part of that. The 404 now appearing on their main site is, I believe, because they moved their site to new digs:

At present, 4:11 PT, it appears they are shifting their DNS records around (there were only two IP entries, and different IPs, in a previous dig at 1 PM).

Here is a DIG now:

;; ANSWER SECTION:
www.gofobo.com.   148   IN   CNAME   screenings-346088557.us-west-2.elb.amazonaws.com.
screenings-346088557.us-west-2.elb.amazonaws.com.   59   IN   A   54.191.95.244
screenings-346088557.us-west-2.elb.amazonaws.com.   59   IN   A   52.35.41.68
screenings-346088557.us-west-2.elb.amazonaws.com.   59   IN   A   52.32.184.40
screenings-346088557.us-west-2.elb.amazonaws.com.   59   IN   A   52.25.144.120

So… it looks like they are moving this to AWS! I am thinking this move happened when the 503 error code became a 404.


Amazon AWS Certified Solutions Architect SWF / SQS Study Sheet

Simple WorkFlow Service – SWF

A web service to coordinate work across distributed application components [ human tasks outside of the process can be included as well ]. Tasks represent invocations of logical steps in applications.

SWF Task is assigned once, never duplicated.

SWF Tasks can be stored for up to one year

SWF keeps track of all tasks in an application

SWF ACTORS

  • Workflow Starters – [ an application or event ] that kicks off the workflow
  • Deciders – control the flow of activity based on the outcomes of task states
  • Activity Workers – programs that interact with SWF to get tasks, process them, and return results

Simple Queue Service – SQS

SQS is a web service that gives access to message queues that can be used to store messages while they are waiting to be processed.

SQS is a distributed Queue System that enables applications to queue messages that one part of an app generates to be consumed by another [ de-coupled ] part of that application.

De-Couple Application components so they can run independently; SQS acts as a buffer between components.

SQS is “pull based,” meaning instances poll it and ask for work.

Messages can be up to 256 KB in size [ and are billed in 64 KB chunks ].

Messages can be stored in SQS for:

  • as little as 1 min
  • DEFAULT of 4 days
  • up to 14 days

For an SQS STANDARD QUEUE: VisibilityTimeout is the amount of time that a message is “invisible” in the SQS queue after an EC2 instance (or other consumer) retrieves that message.

  • If the job is processed BEFORE the VisibilityTimeout expires, the message is deleted from the queue.
  • If the job is not processed within the VisibilityTimeout, the message becomes “visible” again and another consumer may pull it, possibly resulting in the same message being delivered twice.

VisibilityTimeOut MAX is 12 hours 

SQS [ Standard Queue ] will guarantee a message is delivered at least once.

  • but will NOT guarantee message order
  • but will NOT guarantee message is ONLY delivered once ( e.g. could be delivered twice )

Long Polling vs. Short Polling: In almost all cases, Amazon SQS long polling is preferable to short polling. Long-polling requests let your queue consumers receive messages as soon as they arrive in your queue while reducing the number of empty ReceiveMessageResponse instances returned.

Long-Polling does not return a response until message is in message queue. [ will save money, because you are not polling an empty queue ]

Short polling returns immediately, even if the queue is empty.
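To tie the SQS ideas together, here is a minimal boto3 consumer sketch using long polling (WaitTimeSeconds) and the visibility timeout; the queue URL and processing function are placeholders for illustration:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-west-2.amazonaws.com/123456789012/example-queue"  # placeholder

def process(body):
    # Stand-in for real work; with a Standard queue this must tolerate seeing
    # the same message more than once (at-least-once delivery).
    print("processing:", body)

while True:
    # Long poll: wait up to 20 seconds for a message instead of returning
    # immediately on an empty queue (fewer empty ReceiveMessage responses).
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
        VisibilityTimeout=60,   # message stays invisible to other consumers for 60s
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete BEFORE the visibility timeout expires, or the message becomes
        # visible again and may be delivered to another consumer.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])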
