A good friend of mine was concerned about the recent news release on Kaspersky and removed their software from his computer. Removing Kaspersky is probably a good idea, but if you are doing that, there are some bigger, perhaps more effective actions you can take to protect yourself against the bad guys.
It seems like only a week after I wrote about the need for voice authentication on Alexa, Engadget published an article about how Siri and Alexa are vulnerable to nefarious commands. Basically, researchers were sending commands at ultra-high frequencies and getting the Electronic Personal Assistants to respond.
In perhaps a related mystery, oftentimes during a movie or a TV show I will observe Alexa wake up and respond to the movie, usually with “I don’t know about that”, when none of the dialog said anything that resembled ‘Alexa’. It makes me wonder how long people have known about the vulnerability Engadget described, and whether they have been exploiting it in front of our very ears.
Hmmm. Let’s take it a step further… what if there were a special Alexa skill coded so that Alexa did not verbally respond, but could kick off a function in the background? It is possible. You could then secretly tell an Alexa to execute a task with a recorded ultra-high-frequency command, and Alexa ‘could’ quietly execute the task, all without the Alexa owner’s knowledge.
Until I can get voice authentication for my Echo – Alexa won’t be hooked up to anything that can cause too much havoc. I don’t want to be watching ‘Ghost in the Shell’ and have a case of canned unicorn meat from Amazon show up on my front porch the next day.
I have been thinking about the direction Information Security will take these next five years. The pace at which change is occurring in the daily lives of human beings is simply astounding; with new technology making its way into our homes and garages with unprecedented speed and ease. So, here are some things to think about when it comes to the next generation of Information Security.
Security of Electronic Personal Assistants (think Alexa, Siri, OK Google, and whatever Samsung is building). As new “skills”/features/capabilities get added to Electronic Assistants (presently on Amazon, there are 12,000-plus for Alexa, and growing), they will be able to do more, so the potential for an effective hack goes up as well. Another way of putting it: the more aspects of our lives a Digital Assistant can affect, the deeper or more intrusive a hack can be.
Much of the focus around security for Digital Assistants has been on the development side (secure APIs, signed requests from API to cloud service, and, in Amazon’s case, IAM roles to determine what resources an API call can access). Although this is good, it leaves out a large piece of the pie, and that is authorization: ANYONE can talk to my Alexa, Google Assistant, or Siri and get them to perform tasks. Burger King already did this by hijacking Google Assistant.
I am aware that some major banks have been working to gain an edge and build Alexa skills that help sell their services, things like signing up for new credit card offers, being able to book a trip on your new bankcard. Electronic Assistants also have access to our calendar, they can call for emergency services, order a Lyft, post to Slack, check your bank balance, turn your lights on and off, activate your Roomba – you get the idea – Electronic Assistants can do things that create a physical action in the real world.
So, really then, all someone would have to do to gain access to all of the contexts of your life which your Electronic Assistant controls is: one, break in to your house; or two, set up their own Electronic Assistant on your account from wherever they are. One method requires bypassing a $40 Kwikset lock; the other requires knowing the username and password of the service hosting your Electronic Assistant.
To solve this, our Electronic Assistants will need to know our voices and only take commands from their authorized owners. Combine this with two-factor: maybe an RFID ring you could wear, or a verbal passphrase that would “open” your Electronic Assistant for 5 minutes so you can command it. Vendors could also program location awareness into them and allow them to work only from inside your home – cutting off the chance of someone setting one up in another part of the world.
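To make that concrete, here is a toy sketch of the “verbal passphrase opens the assistant for 5 minutes” idea. Everything here is hypothetical – the class, the passphrase, and the voiceprint flag are made up for illustration; no real Alexa/Google/Siri API works this way.

```python
# Hypothetical sketch: a time-limited "unlock" session for a voice assistant,
# combining a voiceprint match with a verbal passphrase (two factors).
import time

SESSION_SECONDS = 300  # assistant stays "open" for 5 minutes after unlocking

class AssistantLock:
    def __init__(self, passphrase: str):
        self._passphrase = passphrase
        self._unlocked_until = 0.0  # locked by default

    def unlock(self, spoken_passphrase: str, voiceprint_ok: bool) -> bool:
        # Both factors must pass: the voiceprint AND the passphrase.
        if voiceprint_ok and spoken_passphrase == self._passphrase:
            self._unlocked_until = time.time() + SESSION_SECONDS
            return True
        return False

    def can_execute(self) -> bool:
        # Commands are only honored inside the unlock window.
        return time.time() < self._unlocked_until

lock = AssistantLock("open sesame")
assert not lock.can_execute()                      # locked by default
assert lock.unlock("open sesame", voiceprint_ok=True)
assert lock.can_execute()                          # open for 5 minutes
```

An ultrasonic command replayed by an attacker would fail both factors here: no voiceprint match, no passphrase, no unlock window.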
The second topic where I see Information Security will be needed is:
Security of Autonomous Vehicles
There was a revelation lately that you can confuse a self-driving car by placing stickers on a stop sign.
I am certain that example is just one of thousands of methods which could be performed in the real world to disrupt the sensors and programming on autonomous cars. My mind can come up with all kinds of other ways: additional yellow stripe paint on the road, high-intensity light (laser pointers) used to interfere with the cameras, a mannequin placed in the road to stop an autonomous car indefinitely, conditions of conflict the programmers did not anticipate (red and green lights at the same time), driving with a cardboard silhouette of a person attached to the front of the car. You get the idea. Autonomous vehicles will be easy to disrupt (at least the first few generations, anyway).
Again, the first thing that comes to mind in thinking about solutions is that some kind of two-factor method may prove helpful here – provide a second way of verifying whether a condition is true or false. For the stop sign, in combination with recognizing an octagon shape with letters, perhaps the car could pre-verify the existence of the sign by querying an enhanced map database that has citywide sign locations and meanings. (Yes, I know that can be hacked too, but the difficulty goes up from slapping a sticker on a sign to a full-fledged DB hack.) Add another factor: make the signs, stoplights, and guardrails “intelligent,” and they could have their own unique electronic signatures and communicate with the car. Failsafes are a given when conditions are not met. Humans could also be a second factor: when the machine cannot verify whether a condition is true, the car could query you (although that does not instill the kind of faith car companies want people to have in autonomous cars).
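As a thought experiment, here is a tiny sketch of that two-factor stop-sign check. It is entirely hypothetical – the function, the map database, and the locations are made up; no vendor’s actual perception stack looks like this.

```python
# Hypothetical two-factor sign check: trust the camera only when it agrees
# with a map database of known sign locations; conflicts trigger a failsafe.
def verify_stop_sign(camera_says_stop: bool, map_db: set, location) -> str:
    in_db = location in map_db
    if camera_says_stop and in_db:
        return "stop"        # both factors agree
    if camera_says_stop != in_db:
        return "failsafe"    # conflict: slow down, ask a human, etc.
    return "proceed"         # neither factor sees a sign

known_signs = {("main_st", "3rd_ave")}
assert verify_stop_sign(True, known_signs, ("main_st", "3rd_ave")) == "stop"
# Sticker-defaced sign: the camera misses it, but the map DB still knows,
# so the car fails safe instead of blowing through the intersection.
assert verify_stop_sign(False, known_signs, ("main_st", "3rd_ave")) == "failsafe"
```

The point of the sketch is the disagreement branch: a sticker attack has to beat both factors, not just the camera.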
Last, use machine learning as a solution. Most of us drive to the same places every day, and the car could be taught all it needs to know from the first few times through the route. This method, in combination with the others, would make autonomous cars safer for human beings. So… for all you security diehards out there, YES, all of these solutions can be hacked. The level of complexity rises, however. I am only suggesting methods to make autonomous cars MORE secure and safe, fully recognizing that the only safe way to not have your car hacked is to buy a car from 1983 or earlier; or, if you are really paranoid, roll 19th-century style with horse and buggy, top hat, and monocle.
Stay Safe! Stay Secure!
Hi! I passed the AWS SysOps Associate exam, so I wanted to spend a few minutes and give my thoughts on it. I can’t give any data on actual questions, because that breaks the AWS NDA, but I do have some personal insight I will share.
As a multiple-choice exam with no exhibits, screenshots, or AWS CLI emulators, I found the AWS SysOps exam experience to diverge somewhat from the actual experience of using the AWS console and CLI in real life. That’s not to say you don’t have to know your stuff, because you do – I am saying that if the test authors spent as much time incorporating console screen grabs (or using an emulator to have the test taker type AWS CLI commands) as they did on pure word trickery, this exam would truly be great. As it stands, you have to be good both at using the AWS console and CLI and at abstracting the AWS GUI experience into the written-word MCQ format.
The exam was heavy on Auto Scaling. In fact, it could have been called the AWS Auto Scaling exam. Many different scenarios were presented, and the best scaling solution had to be selected. Know connection draining and load balancing inside and out – I was hit hard on those!
Second, the exam was heavy on CloudWatch (as it should be). Know all your CW metrics, which services have 1-minute metrics by default, CW namespaces, etc. This makes sense, as a good SysOps person knows where to get logs and how to read them. Again, I can’t overstate the importance of knowing CloudWatch inside and out. Know the CloudWatch API calls. Read the CloudWatch FAQs.
Third, know your VPCs, routing, and security tools. Know which subnet is the “main” one when you use the VPC wizard to make a VPC with public and private subnets. Know which resources the VPC wizard spins up for each of the four wizard types, and know whether you can delete those resources in each case. (To study for this, I did labs of each a few different times, tried deleting things, and noted what was left.) Know when you need to use the routing table, what it does, and where you need it – I had a few scenarios asking which routes go where; so yeah… you need to know routing.
Ahh yes, the security tools. Know your SSE-C for S3: how it works with the API, what the SSE-C API sends in each call, etc. Know bucket policies inside and out; know how to READ JSON bucket policies and what they do, and when an explicit Deny “trumps” an Allow. Know the security settings AWS recommends for console users, best practices, and which services Amazon is responsible for vs. which ones the customer is responsible for. Know NACLs vs. security groups, and when and where you use each one. IAM fundamentals and basic policies are a must.
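To illustrate the “explicit Deny trumps Allow” rule, here is a minimal example policy expressed as a Python dict (the bucket name and prefix are placeholders I made up, not anything from a real account):

```python
# A minimal S3 bucket policy showing that an explicit Deny wins even when
# an Allow statement also matches the same objects.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Broad allow: anyone may read objects in the bucket...
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
        {   # ...but this explicit Deny wins for the private/ prefix,
            # even though the Allow above also matches those keys.
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/private/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

On the exam you read policies like this and work out the effective access: here, `example-bucket/readme.txt` is world-readable, while anything under `private/` is denied to everyone.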
Last, know all your S3, EC2, and EBS basics – I actually went over all my notes (and class material) from the Solutions Architect – Associate exam, because, yes, there is some overlap.
Sources: I used the ACloudGuru online SysOps course to train – Ryan Kroonenberg is a GREAT instructor. The class is 16 hours; I went through it a couple of times, but it is enough for the foundations only, not enough to pass the exam. I read the FAQs for all services (again and again), did any practice questions I could get my hands on, and labbed things up again and again.
As a security professional, I feel that understanding the inner workings of AWS SysOps will aid in securing applications in AWS: protecting the services that run, understanding where built-in AWS tools are not enough, and knowing where I might need other vendors to fill the gaps. I was glad the exam hit on some of the security aspects of AWS – can’t do that enough. 🙂 After both exams, I still feel like the AWS learning is really just beginning.
I hope AWS (and other vendors) continue to move away from the MCQ format for certification exams and move toward what Cisco and Red Hat do: use emulators for hands-on testing of student knowledge.
Thanks for hanging out with me! I hope this helps!
“SOPHOS – Security SOS Botnet Webinar” Write-up by Chris Henson
VERY early last Thursday, I attended the Sophos Security SOS ‘Botnets – malware that makes you part of the problem’ webinar. The webinar was early for me because it was hosted late in the day in the UK. The main speaker was Paul Ducklin. Paul knows his stuff when it comes to malware, as do many engineers at Sophos; that team has some of the most extensive technical write-ups on malware behavior out there.
As usual, I took notes, so I wanted to share them here:
-BEGIN WEBINAR NOTES –
Info about Botnets:
There is a rise in bot builder tools – semi-custom software packs where the operator can customize phishing [dropper] campaigns and utilize the bots in a variety of ways. Bots can be customized to report back / call home with specific attributes of the computer they take over: current patch level, disk space, GPU, memory, enumerated running processes, enumerated security products installed, etc.
Web-based botnet consoles have knobs / dials / tools and give out various types of information about the botnet they control in a dashboard layout: geo-location, OS type, target, who reported in, how long ago, etc.
This data can then be used to conscript the bot into a specific type of botnet:
- if you have infected many machines with high GPU capabilities, those machines could go to a bitcoin-mining botnet.
- if the initial infection is a corporate machine, the data about the security tool sets installed may be valuable to other bad guys.
- if machines are found that have HUGE disk space, those machines become part of a storage botnet.
- if you are an average machine, or an IoT device, you get conscripted into a DDoS botnet that can be rented out.
Bots – smaller, more basic kits – simply act as downloaders:
- for other kinds of software, sometimes even “legitimate” ad-ware, where companies are paid each time their ad-ware is installed.
- for more specific botnets, tbd later by the attacker, SPAM, keylogging
- when machine is sold to other bad guy, they decide what to download
- multiple bots [ a machine can be a part of more than one botnet ]
Bots and Ransomware:
After a bot has exceeded its useful life, the attacker may try to get another $200 – $600 and have the bot’s last job be to install ransomware. The reverse is also true: ransomware can have extra code that installs bots, so even after you pay, the machine is still infected.
Keeping bots off your Computer:
- Patch, patch and patch – reduce the risk surface.
- Remove Flash from your machine [ Adobe Flash has been #1 target of infections ]
- Do not run JAVA in your browser
- Oracle recently modified the base JAVA install to run as an app only on the machine, and NOT as an applet in the browser
- Things like home router; cameras, IoT, always get the latest vendor firmware
- if a device is old and vulnerable, it’s time to scrap it and get a new one
- Microsoft Sysinternals tool set to see processes
- Wireshark Tools
- [ my own note ] Security Onion with BRO, ELSA installed, getting a tapped or spanned feed from suspected machine
Hi friends, I recently passed the AWS Certified Solutions Architect – Associate exam. Woo Hoo! And although this was a cool accomplishment, I feel that I barely have my toe in the water when it comes to knowledge within AWS – so I have to keep going! I am knowledge hungry! After tossing a coin to choose my next AWS exam – the networking specialty or SysOps – the SysOps exam won out. Here is the first of a series of study sheets for the SysOps.
AWS CloudWatch comes up first on the Monitoring section. As Amazon puts it: “Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources”
CloudWatch is a metrics repository: AWS services place metrics in the repository, and you, the AWS user, view statistics derived from those metrics. Custom metrics are supported.
A namespace is a container for CloudWatch metrics. Metrics in separate namespaces are isolated from one another.
A metric represents a time-ordered set of data points that are published to CloudWatch. Each data point must be marked with a timestamp.
The timestamp can be up to two weeks in the past and up to two hours into the future. Timestamps are based on the current time in UTC.
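As a quick illustration, here is a stdlib-only sketch of building a custom metric data point with an explicit, back-dated UTC timestamp. The metric name and namespace are made up; with boto3 you would hand this dict to `cloudwatch.put_metric_data(Namespace="MyApp", MetricData=[datum])`.

```python
# Building a CloudWatch PutMetricData payload with an explicit timestamp.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
datum = {
    "MetricName": "PageViews",               # hypothetical custom metric
    "Timestamp": now - timedelta(hours=1),   # back-dated by one hour
    "Value": 42.0,
    "Unit": "Count",
}

# CloudWatch rejects timestamps more than two weeks in the past,
# so it is worth validating before sending:
assert now - datum["Timestamp"] < timedelta(weeks=2)
```

Omitting the timestamp entirely is also valid – CloudWatch then stamps the data point with the time it was received.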
CloudWatch retains metric data:
- Data points gathered every 60 seconds are available for 15 days
- Data points gathered every 300 seconds (5 min) are available for 63 days
- Data points gathered every 3600 seconds (1 hr) are available for 455 days (15 months)
CloudWatch also supports the concept of alarms, derived from the metrics in the repository. “An alarm watches a single metric over time; and performs one or more actions, based on the value of a metric threshold over time.” Alarms create actions when a service is in a specific state for a sustained period of time.
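To show how “a specific state for a sustained period of time” maps onto alarm settings, here is a sketch of the keyword arguments you would pass to boto3’s `cloudwatch.put_metric_alarm(**alarm)`. The alarm name, instance ID, and threshold are placeholders.

```python
# A CloudWatch alarm definition: the "sustained period" is
# Period x EvaluationPeriods, not a single breached data point.
alarm = {
    "AlarmName": "high-cpu",                  # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                  # 5-minute aggregation periods...
    "EvaluationPeriods": 2,         # ...breached twice in a row
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
}

# This alarm fires only after CPU stays above 80% for a sustained
# Period * EvaluationPeriods = 600 seconds (10 minutes).
assert alarm["Period"] * alarm["EvaluationPeriods"] == 600
```

That `Period * EvaluationPeriods` arithmetic is exactly the kind of thing the exam likes to probe in its scaling scenarios.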
Dimensions are name/value pairs that identify a metric. AWS services that send data to CloudWatch attach dimensions to each metric. Dimensions are used to filter results; for example, you can get stats for a specific EC2 instance by specifying the ‘InstanceId’ dimension. You can assign up to ten dimensions to a metric.
Statistics are metric data aggregations over time. CloudWatch computes statistics from the metric data points provided by custom data or by other AWS services. Aggregations are made using the namespace, metric name, dimensions, and the data-point unit of measure, within the time range you specify [Min, Max, Sum, Average]. Each statistic has a unit of measure; if you do not specify a unit, CloudWatch uses None.
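Putting namespace, metric name, dimensions, period, and statistics together, here is a sketch of a statistics query as you would pass it to boto3’s `cloudwatch.get_metric_statistics(**query)` (the instance ID is a placeholder):

```python
# A CloudWatch statistics query: the aggregation is fully defined by
# namespace + metric name + dimensions + period + requested statistics.
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
query = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "StartTime": end - timedelta(hours=1),   # last hour of data
    "EndTime": end,
    "Period": 300,                           # one data point per 5 minutes
    "Statistics": ["Average", "Maximum"],    # aggregations to return
}
```

An hour-long window with a 300-second period returns at most 12 data points per requested statistic.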
Period is the length of time associated with a specific AWS CloudWatch statistic.
- Alarms: 10/month/customer free; limit of 5,000 per region per account
- API requests: 1,000,000/month/customer free
- Dimensions: 10 per metric
- Metric data retention: 15 months
The 4 standard [default] CloudWatch metrics for EC2 are:
- CPU, Disk, Network and Status Checks
Memory metrics are NON-standard / non-default on CloudWatch
Two types of status checks:
- System Status Checks [ for underlying physical host ] [ start /stop VM to resolve ]
- Instance Status Checks [ for actual VM ] [ reboot instance to resolve ]
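A small sketch of telling the two status checks apart programmatically. The dict below mirrors the `SystemStatus` / `InstanceStatus` shape that EC2’s DescribeInstanceStatus call returns; the triage logic is my own shorthand for the remediation rules above, not an AWS API.

```python
# System check failure -> underlying host problem (stop/start migrates you).
# Instance check failure -> your VM's problem (reboot / fix the OS).
def triage(status: dict) -> str:
    if status["SystemStatus"]["Status"] != "ok":
        return "stop/start the instance (host problem)"
    if status["InstanceStatus"]["Status"] != "ok":
        return "reboot the instance (VM problem)"
    return "healthy"

sample = {"SystemStatus": {"Status": "impaired"},
          "InstanceStatus": {"Status": "ok"}}
assert triage(sample) == "stop/start the instance (host problem)"
```

The key exam point is baked into the branch order: a stop/start moves the instance to new underlying hardware, which a plain reboot does not.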
EBS Monitoring on Cloudwatch
Two types of Monitoring for EBS
- Basic: data is available in 5-minute periods at no charge. This includes data for the root device volumes of EBS-backed instances.
- Detailed: Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch.
EBS sends several metrics to CloudWatch for these storage types:
- General Purpose SSD (gp2) volumes automatically send five-minute metrics to CloudWatch # the ( ) denotes the API name
- Throughput Optimized HDD (st1) volumes automatically send five-minute metrics to CloudWatch
- Cold HDD (sc1) volumes automatically send five-minute metrics to CloudWatch
- Magnetic (standard) volumes automatically send five-minute metrics to CloudWatch
- Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch # SUPER fast, high IOPS!
Specific EBS metric names are documented by AWS – with special emphasis on VolumeQueueLength: “The number of read and write operation requests waiting to be completed in a specified period of time”. If this keeps climbing, your disk IOPS may need to increase.
Two Volume status metrics to which you should pay attention:
- warning means: “Degraded (Volume performance is below expectations)” or “Severely Degraded (Volume performance is well below expectations)”
- impaired means: “Stalled (Volume performance is severely impacted)” or “Not Available (Unable to determine I/O performance because I/O is disabled)” – your volume is basically hosed!
The EBS BurstBalance percent metric is also documented by AWS; here are my notes:
- General Purpose SSD (gp2) EBS volumes have a base of 3 IOPS per GiB of volume size, a max volume size of 16,384 GiB, and a max burstable IOPS of 10,000 [if you exceed this, you need to move to Provisioned IOPS SSD (io1)]
Cloud architect Dariusz Dwornikowski describes the I/O credit concept for burst balance very well on his blog: “Think of I/O credits as of money a disk needs to spend to buy I/O operations (read or write). Each such operation costs 1 I/O credit. When you create a disk it is assigned an initial credit of 5.4 million I/O credits. Now these credits are enough to sustain a burst of highly intensive I/O operations at the maximum rate of 3000 IOPS (I/O per second) for 30 minutes. When the balance is drained, we are left with an EBS that is totally non-responsive.”
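His arithmetic checks out – a quick back-of-the-envelope in Python:

```python
# Checking the I/O credit math quoted above: 5.4 million credits
# drained at a 3,000 IOPS burst last exactly 30 minutes.
initial_credits = 5_400_000   # credits a new gp2 volume starts with
burst_iops = 3_000            # max burst rate (1 credit per I/O)

seconds = initial_credits / burst_iops
assert seconds == 1800        # 1800 s = 30 minutes

# The baseline of 3 IOPS per GiB means a volume of 1,000 GiB or more
# earns credits as fast as a 3,000 IOPS burst spends them, so it
# never drains its balance at that rate.
baseline_iops = 3 * 1000
assert baseline_iops == burst_iops
```

That second calculation is worth remembering: below 1,000 GiB, sustained heavy I/O will eventually empty the credit bucket and throttle the volume to its baseline.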
Pre-warming EBS – initializing a volume restored from a snapshot by reading all of its blocks before you use it, for best performance.
RDS Monitoring on CloudWatch
- Per Amazon: “Amazon Relational Database Service sends metrics to CloudWatch for each active database instance every minute. Detailed monitoring is enabled by default.”
- In RDS itself, you monitor RDS by EVENTS
- In CloudWatch you monitor RDS by Metrics
Two metrics to which you should pay close attention in RDS:
- ReplicaLag: “The amount of time a Read Replica DB instance lags behind the source DB” [MySQL, MariaDB, PostgreSQL]
- DiskQueueDepth “The number of outstanding IOs (read/write requests) waiting to access the disk”
ELB Monitoring on CloudWatch
- ELB reports metrics only when there is traffic. Or, as Amazon puts it: “If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals. If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.”
- HealthyHostCount metric: “The number of healthy instances registered with your load balancer.”
- Other useful counters are statistics sent by the backend pool members.
ElastiCache Monitoring on CloudWatch
- The AWS “Which metrics should I monitor?” page is the source for the information below:
- Metrics for Memcached:
- CPUUtilization – a host-level metric reported as a percent. For more information, see Host-Level Metrics. Since Memcached is multi-threaded, this metric can be as high as 90%. If you exceed this threshold, scale your cache cluster up by using a larger cache node type, or scale out by adding more cache nodes.
- SwapUsage: This metric should not exceed 50 MB. If it does, we recommend that you increase the ConnectionOverhead parameter value.
- Metrics for Redis:
- CPUUtilization – Redis is single-threaded, so the threshold is calculated as (90 / number of processor cores). For example, suppose you are using a cache.m1.xlarge node, which has four cores. In this case, the threshold for CPUUtilization would be (90 / 4), or 22.5%.
- SwapUsage: No recommended setting with Redis, you can only scale out
- Evictions: a cache engine metric, published for both Memcached and Redis cache clusters. We recommend that you determine your own alarm threshold for this metric based on your application needs.
- Memcached: If you exceed your chosen threshold, scale your cache cluster up by using a larger node type, or scale out by adding more nodes.
- Redis: If you exceed your chosen threshold, scale your cluster up by using a larger node type.
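The Redis CPUUtilization threshold arithmetic from the notes above is easy to sanity-check: since Redis is single-threaded, only one core does the real work, so the usual 90% guideline is divided by the node’s core count.

```python
# Redis CPU alarm threshold = 90 / number of cores,
# because Redis can only ever saturate a single core.
def redis_cpu_threshold(cores: int) -> float:
    return 90 / cores

assert redis_cpu_threshold(4) == 22.5   # cache.m1.xlarge: 4 cores
assert redis_cpu_threshold(1) == 90.0   # single-core node
```

So on a 4-core node, 25% host-level CPU already means the Redis core is pegged – which is why the multi-threaded Memcached guidance (up to 90%) does not carry over.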
I had the amazing honor of getting a front-row seat for Brian Krebs’ keynote speech at the SailPoint Navigate conference last week in Austin, TX! Brian is as exceptional a public speaker as he is a writer. Krebs has been my teacher for a few years now (through extensive reading and studying of his blog: https://krebsonsecurity.com ). During the keynote, he captivated the audience by highlighting what he has learned in his experiences. I wrote by hand in my notebook as fast as I could, tried to capture as much of it as possible, and put it all together here:
- Authentication and Identity Compromises are why there are so many Security breaches; the attacker essentially becomes the user with stolen, compromised credentials
- Weakest part of the organization is the farthest point out – the users
- “Everyone gets pen-tested whether or not they pay for it” < that is so true!
- In most breaches of the last decade, the org has had no clue the attacker was on their network.
- Security Awareness Training is still an effective method to help mitigate breaches.
- We have no business using “static identifiers” in 2017! How do we get better?
- Two-factor can blunt many attacks! The industry relies on tools too much; it needs to rely more on humans to interpret the tools. Target had tools, but people could not make sense of what they were getting.
- Trained SecOps people doing basic ‘blocking and tackling’, and curious human beings looking at tool output, are needed to find the bad guys.
- Build a solid SecOps team ( If orgs cut back on Security people, their visibility decreases.)
- Mitigate Account Take-over [ e.g., using your same creds across multiple web services ]; credential replay can be done by bots at a slow rate to avoid SecTool detection; need a human eye on the screen.
Krebs then changed up topics to predictions:
- Ransomware attacks may become more targeted, and attackers will better understand the data (and the value of that data) they encrypt, so they can ask a proper ransom for it.
- IoT – will be a major challenge. Shodan lists all kinds of targets. Krebs’ site was DDoS’d [ 620 Gbps ] by a massive Botnet consisting of IoT devices; expect this trend to continue.
- Potentially more disruptive attacks [ WannaCry ]
More Solutions outlined:
- Get beyond Compliance; don’t just meet the audit; go further
- Invest in 2FA everywhere!
- Do your back-ups correctly, don’t leave them open, or exposed!
- Drills exercises; red team vs. blue team so your team will be ready and can run the playbook!
- Secure what you have
- Watch out for vendor ‘kool-aid’ that their tools can replace people, simply not true!
- Strengthen and invest in current employees
- Assume you are compromised
- Watch out for your business partners
After the speech was over, he wanted to stay and answer questions for the audience; unfortunately, the vendor rushed him off stage so some C-level person could speak (but not before I got to shake his hand and thank him for all his work and how much he has helped me professionally)! Thank you, Brian! It was great to finally meet you!