Empower your career – Break free of the silos!

Yes, sometimes I go off the Security topic to touch on other things that interest me, especially when I feel my experience can be of value to you. For a while now, I have had in the back of my mind that working in a strictly siloed org (in a technology field) is a recipe for career stagnation, yet most of the big Fortune 500 companies operate in silos. I have worked for a few of the F500s and had to figure out how to thrive and grow in some tough cultures. Here are some ideas to empower you, based on my experience.

Yes – I fought hard against ‘silo-ization’ in my previous roles over the years – and grew. You can too.

One: Spend a portion of your time away from work learning new technology, consistently. I bought and used my own gear specifically for this purpose. I built out my own learning environment on various laptops with all of the software tools I needed, since Corporate Policies prohibited me from installing software on my work laptop (rightfully so). I used purchased subscriptions (and leveraged company subscriptions) to various online tech tutorial sites – Lynda.com, Safari.com, acloud.guru and others – over the years. I engaged in lots of hands-on labs and programming exercises, and listened to lessons again and again in the evenings. I still do!

Two: Go to local vendor- or community-sponsored Meetups around the city. Again, these usually happen after hours, but once in a while I was able to convince my supervisor to let me go to Security-related seminars even though they were not about a product my team supported. These meetings were priceless – I would meet new people, make great connections and sit in on tech I would never get to see at work.

Three: Be Certification hungry. In 2017, I renewed my CCNP, achieved two AWS Certifications and renewed my GIAC GPEN. It sounds like a lot, but if spread out over a year it is not – that kind of effort just takes commitment + some time set aside each day to make progress toward goals. Learning. Never. Stops.

Everyone has their own opinion on Certifications; I believe mine will empower you.

A. Certifications are a GREAT learning tool. Most certifications have a solid syllabus to follow if you want to learn new tech with plenty of hands-on labs.

B. If I ever found myself without work through no fault of my own, I would rather have certifications (especially since others depend on my income) to make me a more attractive candidate.

C. Certifications do not replace experience or the knowledge you get from troubleshooting real live problems.

Four: Work hard and persist to get on projects outside of your space. Around the time I began my prior role, I also had a serious interest in all things ‘Cloud’, so I invited myself to my previous company’s Cloud Design Meetings. Through active attendance, participation and a sincere desire to help, I ended up becoming an official member and the go-to for Security on that solution, because I understood and could demo the technology.

Five: Lead the Lunch and Learns. One of the best ways to learn new tech is to teach it and demo it to others.  I have done several self-initiated lunch and learns on tech outside of my space and it was fun! My invite list extended to people in other teams who also needed a breath of fresh tech air.

Six: Don’t be afraid to change jobs. Case in point – I recently left my last role. A tough choice, yes, but part of the reason was that I was in a silo. Although many people have given me flack for switching jobs, I have found all my moves have empowered my career and led me to great opportunities.

Breaking away from supporting a single technology and building a deep, diverse skill set that spans multiple disciplines is an incredible asset! It makes you more valuable, it makes you better at troubleshooting – and well… it just makes you feel darn good!

I encourage you to get out of that silo and seek roles where you can get your hands on all of the types of tech you desire! No one will manage your career better than you. Take charge – make it happen!


Posted in Uncategorized | Leave a comment

How to protect yourself from most hacks

A very good friend of mine was very concerned about the recent news release on Kaspersky and removed their software from his computer. Removing Kaspersky is probably a good idea, but if you are doing that, there are some bigger, perhaps more effective actions you can take to protect yourself against the bad guys.

First, it is important to understand the types of threats that can manifest against individual human targets, and to take action to reduce the risk of those threats manifesting. The biggest threats to individual consumers are identity theft, followed closely by credit card number theft, and in third place, fraudulent money transfers. So if you want to tighten your ‘security belt’, there are some actions you can take now that really are low-hanging fruit:
* Rotate all your passwords for all your accounts every 90 days (or less).
* Do not re-use any of your passwords across sites.
* Use two-factor on sites that support it (e.g., the site texts you a PIN to use as part of the login).
* Use complex passwords or passphrases where sites allow it.
* Keep a fraud alert on your credit at all times (unless you are buying something big). You can now actually freeze your credit completely, thanks to Equifax 😦. If freezing your credit is distasteful, a credit monitoring service can let you know if bad guys are trying to do something they should not be.
* Always watch your bill for teeny-tiny charges on debit and credit cards – that is how the bad guys test the card numbers they get.
And for bonus-level security, you can choose not to use your main debit card – the one with direct access to your main bank account – for daily transactions. Instead, either use a credit card (and treat it like a debit card) for the day-to-day stuff, or move your money to an account that has no public-facing extensions used for transactions. The fewer people that have info on your personal bank, the better.
The theory is this: if someone gets my credit card and charges it up, that is a whole lot better than someone draining my personal bank account. So I do not leave my debit card number in every restaurant, Target and Home Depot database – because those places get hacked.
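On the complex-passwords bullet above: here is a minimal sketch using Python's standard `secrets` module. The symbol alphabet and the default lengths are purely illustrative choices, not a standard.

```python
import secrets
import string

# Illustrative alphabet: letters, digits, and a handful of symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def make_password(length=20):
    """Generate a random password from letters, digits and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def make_passphrase(word_list, words=5):
    """Generate a passphrase by joining randomly chosen words from a list."""
    return "-".join(secrets.choice(word_list) for _ in range(words))
```

`secrets` uses the operating system's cryptographically strong randomness, which is why it is preferred over `random` for anything security related.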
By no means do I mean to imply that those are ALL the threats to individuals, or that nation-states would have no use for hacked consumer resources (like your computer and IoT devices as part of a botnet army). I am just recommending easy, quick hits you can do to make it harder for the bad guys.
Stay safe friends!

Who is talking to Alexa?

It seems like only a week after I wrote about the need for voice authentication on Alexa, Engadget published this article about how Siri and Alexa are vulnerable to nefarious commands. Basically, researchers were sending commands at ultra-high frequencies and getting the electronic personal assistants to respond.

In a perhaps related mystery, oftentimes during a movie or a TV show I will observe Alexa wake up and respond to the movie, usually with “I don’t know about that”, when none of the dialog said anything that resembled ‘Alexa’. It makes me wonder how long people have known about the vulnerability Engadget reported, and whether they have been exploiting it in front of our very ears.

Hmmm. Let’s take it a step further… what if there were a special Alexa skill, coded such that Alexa did not verbally respond but could kick off a function in the background? It is possible. You could then secretly tell an Alexa to execute a task with your recorded ultra-high-frequency command, and Alexa ‘could’ quietly execute the task, all without the knowledge of the Alexa owner.

Until I can get voice authentication for my Echo – Alexa won’t be hooked up to anything that can cause too much havoc. I don’t want to be watching ‘Ghost in the Shell’ and have a case of canned unicorn meat from Amazon show up on my front porch the next day.


Thinking about Information Security in 2018 and beyond

I have been thinking about the direction Information Security will take these next five years. The pace at which change is occurring in the daily lives of human beings is simply astounding; with new technology making its way into our homes and garages with unprecedented speed and ease. So, here are some things to think about when it comes to the next generation of Information Security.

Security of Electronic Personal Assistants (think Alexa, Siri, OK Google and whatever Samsung is building). As new “skills”/features/capabilities get added to Electronic Assistants (presently there are 12,000-plus for Alexa on Amazon, and growing), the more they will be able to do, and thus the potential for an effective hack goes up as well. Another way of putting it: the more aspects of our lives a Digital Assistant can affect, the deeper and more intrusive a hack can be.

Much of the focus around security for Digital Assistants has been in development: secure APIs, signed requests from the API to the cloud service and, again in Amazon’s case, IAM Roles to determine which resources an API call can access. Although this is good, it leaves out a large piece of the pie, and that is authentication: ANYONE can talk to my Alexa, Google Assistant or Siri and get them to perform tasks. Burger King already did this by hijacking Google Assistant.

I am aware that some major banks have been working to gain an edge and build Alexa skills that help sell their services – things like signing up for new credit card offers or booking a trip on your new bankcard. Electronic Assistants also have access to our calendars; they can call for emergency services, order a Lyft, post to Slack, check your bank balance, turn your lights on and off, activate your Roomba – you get the idea. Electronic Assistants can do things that create a physical action in the real world.

So, really then, all someone would have to do to gain access to all the contexts of your life which your Electronic Assistant controls is either (one) break into your house, or (two) set up their own Electronic Assistant on your account from wherever they are. One method requires bypassing a $40 Kwikset lock; the other requires knowing the username and password of the service hosting your Electronic Assistant.

To solve this, our Electronic Assistants will need to know our voices and only take commands from their authorized owners. Combine this with two-factor – maybe an RFID ring you could wear, or a verbal passphrase that would “open” your Electronic Assistant for 5 minutes so you can command it. Vendors could also program location into them, allowing them to work only from inside your home – cutting off the chance of someone setting one up in another part of the world.

The second topic where I see Information Security being needed is:

Security of Autonomous Vehicles

There was a revelation lately that you can confuse a self-driving car by placing stickers on a stop sign.

I am certain that example is just one of thousands of methods which could be performed in the real world to disrupt the sensors and programming on autonomous cars. My mind can come up with all kinds of other ways: additional yellow stripe paint on the road, the use of high-intensity light (laser pointers) to interfere with the cameras on cars, placing a mannequin in the road to stop an autonomous car indefinitely, creating a condition of conflict the programmers did not anticipate (red and green lights at the same time), or driving with a silhouette of a cardboard person attached to the front of the car. You get the idea. Autonomous vehicles will be easy to disrupt (at least the first few generations, anyway).

Again, the first thing that comes to mind in thinking about solutions is that some kind of two-factor method may prove helpful here – provide a second method of verifying whether a condition is true or false. For the stop sign, in combination with recognizing an octagonal shape with letters, perhaps the car could pre-verify the existence of the sign by querying an enhanced map database that has citywide sign locations and meanings (yes, I know that can be hacked too, but the difficulty goes up from slapping a sticker on a sign to a full-fledged DB hack). Add another factor: make the signs, stoplights and guardrails “intelligent”, each with its own unique electronic signature, so they can communicate with the car. Failsafes are a given when conditions are not met. Humans could also be a second factor: when the machine cannot verify whether a condition is true, the car could query you (although that does not instill the kind of faith car companies want people to have in autonomous cars).
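That map-database second factor could be sketched as below. Everything here is hypothetical for illustration – the sign database format, the coordinates and the 25-meter matching radius are all assumptions, not any vendor's real system.

```python
import math

# Hypothetical sign database: each entry is a sign type plus its location.
SIGN_DB = [
    {"type": "stop", "lat": 41.8781, "lon": -87.6298},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (fine for short distances)."""
    dlat = (lat2 - lat1) * 111_320                                  # meters per degree of latitude
    dlon = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))   # shrink longitude by latitude
    return math.hypot(dlat, dlon)

def verify_sign(detected_type, lat, lon, radius_m=25):
    """Second factor: confirm the camera's detection against the map database."""
    return any(
        s["type"] == detected_type
        and distance_m(lat, lon, s["lat"], s["lon"]) <= radius_m
        for s in SIGN_DB
    )
```

A sticker on the physical sign no longer fools this check on its own; the attacker would also have to tamper with the database.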

Last, use machine learning as a solution. Most of us drive to the same places every day, and the car could be taught all it needs to know from the route the first few times through. This method, in combination with the others, would make autonomous cars safer for human beings. So… for all you security die-hards out there: YES, all of these solutions can be hacked. The level of complexity rises, however. I am only suggesting methods to make autonomous cars MORE secure and safe, fully recognizing that the only sure way to not have your car hacked is to buy a car from 1983 or before – or, if you are really paranoid, roll 19th-century style with horse and buggy, top hat and monocle.

Stay Safe! Stay Secure!


Passed AWS SysOps Associate Exam!

Hi! I passed the AWS SysOps Associate exam, so I wanted to spend a few minutes and give my thoughts on it. I can’t give any data on actual questions, because that breaks the AWS NDA, but I do have some personal insight I will share.

As a multiple-choice exam with no exhibits, screenshots or AWS CLI emulators, I found the AWS SysOps exam experience to diverge somewhat from the actual experience of using the AWS console and CLI in real life. That is not to say you don’t have to know your stuff, because you do – I am saying that if the test authors spent the same amount of time (or more) incorporating console screen grabs, or using an emulator to have the test taker type in AWS CLI commands, as they did on pure word trickery, this exam would truly be great. As the exam stands, you have to be good at both using the AWS console and CLI, and using your mind to abstract the AWS GUI experience into the written-word MCQ format.

The exam was heavy on Auto Scaling. In fact, it could have been called the AWS Auto Scaling exam. Many different scenarios were presented, and the best scaling solution had to be selected. Know Connection Draining and Load Balancing in and out – I was hit hard on those!

Second, the exam was heavy on CloudWatch (as it should be). Know all your CW metrics, which services have 1-minute metrics by default, CW namespaces, etc. This makes sense, as a good SysOps person knows where to get logs and how to read them. Again, I cannot stress enough the importance of knowing CloudWatch inside and out. Know the CloudWatch API calls. Read the CloudWatch FAQs.

Third, know your VPCs, Routing and Security tools. Know which subnet is the “main” one when you use the VPC wizard to make a VPC with public and private subnets. Know which resources the VPC wizard spins up for each of the four wizard types, and whether you can delete those resources in each case. (To study for this I did labs of each, a few different times, tried deleting things, and noted what was left.) Know when you need to use the routing table, what it does and where you need it – I had a few scenarios asking which routes go where; so yeah… you need to know routing.

Ahh yes, the Security tools. Know your SSE-C for S3: how it works with the API, what the SSE-C API sends in each call, etc. Know Bucket Policies in and out; know how to READ JSON bucket policies and what they do, and know when Denys “trump” Allows. Know the recommended security settings AWS has for console users, best practices, and which services Amazon is responsible for vs. which ones the customer is responsible for. Know NACLs vs. Security Groups, and when and where you use each one. IAM fundamentals and basic policies are a must.
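To make the “Deny trumps Allow” rule concrete, here is a toy evaluator. This is a sketch only – real policy evaluation also handles principals, wildcards and conditions, all omitted here – but the precedence it shows (explicit Deny wins, then Allow, and the default is deny) matches how AWS describes it.

```python
def evaluate(statements, action, resource):
    """Toy policy evaluation: explicit Deny wins, then Allow, default deny.

    `statements` mimics the Statement list of a parsed JSON bucket policy;
    wildcard and condition matching are omitted for brevity.
    """
    decision = "ImplicitDeny"
    for s in statements:
        if action in s["Action"] and resource in s["Resource"]:
            if s["Effect"] == "Deny":
                return "Deny"        # an explicit Deny always trumps any Allow
            decision = "Allow"
    return decision

# Example policy with a conflicting Allow and Deny on the same object path.
policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": ["arn:aws:s3:::my-bucket/*"]},
    {"Effect": "Deny", "Action": ["s3:GetObject"],
     "Resource": ["arn:aws:s3:::my-bucket/*"]},
]
```

Running `evaluate(policy, "s3:GetObject", "arn:aws:s3:::my-bucket/*")` lands on the Deny, even though an Allow matched first – exactly the exam point above.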

Last, know all your S3, EC2 and EBS basics – I actually went over all my notes (and class material) from the Architect Associate exam, because, yes, there is some overlap.

Sources: I used the A Cloud Guru online SysOps class to train – Ryan Kroonenberg is a GREAT instructor. The class is 16 hours and I went through it a couple of times, but it is enough for the foundations only, not enough to pass the exam. I read the FAQs for all services (again and again), did any practice questions I could get my hands on, and labbed things up again and again.

As a Security Professional, I feel that understanding the inner workings of AWS SysOps will aid in securing applications in AWS – protecting the services that run, as well as understanding where the built-in AWS tools are not enough and where I might need other vendors to fill the gaps. I was glad the exam hit on some of the Security aspects of AWS – can’t do that enough. 🙂 After both exams, I still feel like the AWS learning is really just beginning.

I hope AWS (and other vendors) continue to move away from the MCQ format for certification exams and move toward what Cisco and Red Hat do: the use of hands-on emulators to test student knowledge.

Thanks for hanging out with me! I hope this helps!



Posted in AWS, AWS Certified Solutions Architect

SOPHOS – Security SOS Botnet Webinar Write-up

“SOPHOS – Security SOS Botnet Webinar” Write-up by Chris Henson

VERY early last Thursday, I attended the Sophos Security SOS ‘Botnets – Malware that makes you part of the problem’ webinar. The webinar was early because it was hosted late in the day in the UK. The main speaker was Paul Ducklin. Paul knows his stuff when it comes to malware, as do many engineers at Sophos – that team has some of the most extensive technical write-ups on malware behavior out there.

As usual, I took notes, so I wanted to share them here:


Info about Botnets:

There is a rise in bot builder tools – semi-custom software packs where the operator can customize phishing [dropper] campaigns and utilize the bots in a variety of ways. Bots can be customized to report back / call home with specific attributes of the computer they take over: current patch level, disk space, GPU, memory, running processes, installed security products, etc.

Web-based botnet consoles have knobs / dials / tools and give out various types of information about the botnet they control in a dashboard layout: geo-location, OS type, target, who reported in, how long ago, etc.

This data can then be used to conscript the bot into a specific type of botnet:

  • If you have infected many machines with high GPU capabilities, those machines could go to a bitcoin-mining botnet.
  • If the initial infection is a corporate machine, the data about the security tool sets installed may be valuable to other bad guys.
  • If machines are found that have HUGE disk space, those machines become part of a storage botnet.
  • If you are an average machine, or an IoT device, you get conscripted into a DDoS botnet that can be rented out.

Bots – smaller, more basic kits – simply act as downloaders:

  • for other kinds of software, sometimes even “legitimate” ad-ware, where companies are paid each time their ad-ware is installed.
  • for more specific botnets, to be determined later by the attacker (spam, keylogging).
  • when the machine is sold to another bad guy, who decides what to download.
  • for multiple bots [a machine can be a part of more than one botnet].

Bots and Ransomware:

After a bot has exceeded its useful life, the attacker may try to get another $200 – $600 by having the bot’s last job be to install ransomware. The reverse is also true: ransomware can have extra code that installs bots, so even after you pay, the machine is still infected.

Keeping bots off your Computer:

  • Patch, patch and patch – reduce the risk surface.
  • Remove Flash from your machine [Adobe Flash has been the #1 target of infections].
  • Do not run Java in your browser.
    • Oracle recently modified the base Java install to run as an app only on the machine, and NOT as an applet in the browser.
  • For things like home routers, cameras and IoT, always get the latest vendor firmware.
    • If a device is old and vulnerable, it is time to scrap it and get a new one.

Detecting bots:

  • Microsoft Sysinternals tool set, to see processes.
  • Wireshark.
  • [my own note] Security Onion with Bro and ELSA installed, getting a tapped or spanned feed from the suspected machine.



AWS Certified SysOps Administrator – Associate: Study Sheet – Monitoring Section

Hi friends, I recently passed the AWS Certified Solutions Architect – Associate exam. Woo hoo! And although this was a cool accomplishment, I feel that I barely have my toe in the water when it comes to knowledge within AWS – so I have to keep going! I am knowledge hungry! After tossing a coin to choose my next AWS exam, the Routing specialty or SysOps, the AWS SysOps exam won out. Here is the first of a series of study sheets for SysOps.

AWS CloudWatch comes up first in the Monitoring section. As Amazon puts it: “Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.”

CloudWatch is a metrics repository: AWS services place metrics in the repository, and you, the AWS user, view statistics derived from those metrics. Custom metrics are supported.

A namespace is a container for CloudWatch metrics. Metrics in separate namespaces are isolated from one another.

A metric represents a time-ordered set of data points that are published to CloudWatch. Each data point must be marked with a timestamp.

The timestamp can be up to two weeks in the past and up to two days in the future.  Timestamps are based on current time in UTC. 

CloudWatch retains metric data:

  • Data points gathered every 60 seconds are available for 15 days.
  • Data points gathered every 300 seconds (5 min) are saved for 63 days.
  • Data points gathered every 3600 seconds (1 hr) are saved for 455 days (15 months).
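Those retention tiers can be captured in a tiny lookup table – a study-sheet simplification, not the full CloudWatch rollup behavior:

```python
def retention_days(period_seconds):
    """Map a CloudWatch metric period to its retention, per the tiers above."""
    tiers = {
        60: 15,      # 1-minute data points -> 15 days
        300: 63,     # 5-minute data points -> 63 days
        3600: 455,   # 1-hour data points   -> 455 days (15 months)
    }
    if period_seconds not in tiers:
        raise ValueError("period not covered by this simplified table")
    return tiers[period_seconds]
```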

CloudWatch also supports the concept of alarms, derived from the metrics in the repository: “An alarm watches a single metric over time, and performs one or more actions based on the value of a metric threshold over time.” Alarms create actions when a service is in a specific state for a sustained period of time.

Dimensions are name/value pairs that identify a metric. AWS services that send data to CloudWatch attach dimensions to each metric. Dimensions are used to filter results; for example, you can get stats for an EC2 instance by specifying the ‘InstanceId’ dimension. You can assign up to ten dimensions to a metric.
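The dimensions-as-filters idea can be sketched with plain dictionaries. The records and instance IDs below are made up for illustration; in real life you would use CloudWatch's ListMetrics/GetMetricStatistics APIs.

```python
# Hypothetical in-memory records shaped loosely like CloudWatch metric
# descriptions: a namespace, a metric name, and a dimension map.
metrics = [
    {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization",
     "Dimensions": {"InstanceId": "i-0abc"}},
    {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization",
     "Dimensions": {"InstanceId": "i-0def"}},
]

def filter_by_dimension(records, name, value):
    """Keep only the records whose dimension `name` equals `value`."""
    return [r for r in records if r["Dimensions"].get(name) == value]
```

Filtering on `InstanceId` narrows the shared `CPUUtilization` metric down to one instance's data, which is exactly what the dimension is for.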

Statistics are metric data aggregations over time. CloudWatch provides statistics based on the metric data points provided by custom data or by other services in AWS. Aggregations are made using the namespace, metric name, dimensions and the unit of measure of each data point, within the time period you call out [Minimum, Maximum, Sum, Average]. Each statistic has a unit of measure; if you do not specify one, CloudWatch uses None as the unit.

A period is the length of time associated with a specific CloudWatch statistic.
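Given one period's worth of data points, the four basic statistics above can be computed like this – a plain-Python illustration of the aggregation, not the CloudWatch service itself:

```python
def aggregate(datapoints):
    """Compute the four basic CloudWatch statistics over a period's datapoints."""
    return {
        "Minimum": min(datapoints),
        "Maximum": max(datapoints),
        "Sum": sum(datapoints),
        "Average": sum(datapoints) / len(datapoints),
    }
```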

CloudWatch Limits:

  • Alarms: 10/month/customer free; 5,000 per region per account.
  • API requests: 1,000,000/month/customer free.
  • Dimensions: 10 per metric.
  • Metric data retention: 15 months.

The 4 standard [default] CloudWatch metrics are:

  • CPU, Disk, Network and Status Checks

Memory metrics are NON-standard / non-default in CloudWatch.

Two types of status checks:

  • System Status Checks [ for underlying physical host ] [ start /stop VM to resolve ]
  • Instance Status Checks [ for actual VM ] [ reboot instance to resolve ]

EBS Monitoring on CloudWatch

Two types of Monitoring for EBS

  • Basic: 5-minute periods at no charge. This includes data for the root device volumes for EBS-backed instances.
  • Detailed: Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch

EBS sends several metrics to CloudWatch for these storage types:

  • Amazon EBS General Purpose SSD (gp2),  # the () denotes the API name
  • Throughput Optimized HDD (st1)
  • Cold HDD (sc1) volumes automatically send five-minute metrics to CloudWatch
  • Magnetic (standard) volumes automatically send five-minute metrics to CloudWatch.
  • Provisioned IOPS SSD (io1) volumes automatically send one-minute metrics to CloudWatch. # SUPER fast, high IOPS!

Specific EBS metric names are here – with special emphasis on VolumeQueueLength: “The number of read and write operation requests waiting to be completed in a specified period of time.” If this keeps incrementing, your volume’s IOPS may need an increase.

Two Volume status metrics to which you should pay attention:

  • warning means: “Degraded (Volume performance is below expectations); Severely Degraded (Volume performance is well below expectations)”.
  • impaired means: “Stalled (Volume performance is severely impacted); Not Available (Unable to determine I/O performance because I/O is disabled)”. Your volume is basically hosed!

Burst Balance

EBS Burst Balance Percent Metric  is described here and here are my notes:

  • General Purpose SSD (gp2) EBS volumes have a base of 3 IOPS per GiB of volume size, a max volume size of 16,384 GiB and a max burstable IOPS of 10,000 [if you exceed this, you need to move to Provisioned IOPS SSD (io1)].

Cloud Architect Dariusz Dwornikowski describes the I/O credit concept for burst balance very well in his blog: “think of I/O credits as of money a disk needs to spend to buy I/O operations (read or write). Each such operation costs 1 I/O credit. When you create a disk it is assigned an initial credit of 5.4 million I/O credits. Now these credits are enough to sustain a burst of highly intensive I/O operations at the maximum rate of 3000 IOPS (I/O per second) for 30 minutes. When the balance is drained, we are left with an EBS that is totally non-responsive.”
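The arithmetic behind that quote checks out, and it is worth doing once yourself. The sketch below ignores the credit refill at the baseline rate, matching the simplified model in the quote:

```python
def gp2_baseline_iops(size_gib):
    """gp2 baseline performance: 3 IOPS per GiB of volume size."""
    return 3 * size_gib

def burst_duration_minutes(credits=5_400_000, burst_iops=3000):
    """How long the initial credit balance sustains a full burst
    (credit refill ignored, as in the simplified quote above)."""
    return credits / burst_iops / 60
```

So 5.4 million credits spent at 3000 IOPS last 1800 seconds, i.e. the 30 minutes Dwornikowski cites; in practice the burst lasts a bit longer because credits refill at the baseline rate while you burn them.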

Pre-warming EBS: initializing a snapshot by reading all of its blocks before you use it, for best performance.

RDS Monitoring 

  • Per Amazon: “Amazon Relational Database Service sends metrics to CloudWatch for each active database instance every minute. Detailed monitoring is enabled by default.”
  • In RDS itself, you monitor RDS by EVENTS
  • In CloudWatch you monitor RDS by Metrics 

Two metrics to which you should pay close attention in RDS:

  • ReplicaLag: “The amount of time a Read Replica DB instance lags behind the source DB instance” [MySQL, MariaDB, PostgreSQL].
  • DiskQueueDepth “The number of outstanding IOs (read/write requests) waiting to access the disk”

Elastic Load Balancer Metrics for CloudWatch

  • ELB reports metrics only when there is traffic. Or, as Amazon puts it: “If there are requests flowing through the load balancer, Elastic Load Balancing measures and sends its metrics in 60-second intervals. If there are no requests flowing through the load balancer or no data for a metric, the metric is not reported.”
  • HealthyHostCount metric: “The number of healthy instances registered with your load balancer.”
  • Other useful counters are the status codes that backend pool members send: HTTPCode_Backend_2XX, HTTPCode_Backend_3XX, HTTPCode_Backend_4XX, HTTPCode_Backend_5XX.

ElastiCache Monitoring in CloudWatch

“Which metrics should I monitor?” is the source for the information below:

  • Metrics for Memcached – CPUUtilization: This is a host-level metric reported as a percent. For more information, see Host-Level Metrics. Since Memcached is multi-threaded, this metric can be as high as 90%. If you exceed this threshold, scale your cache cluster up by using a larger cache node type, or scale out by adding more cache nodes.
  • SwapUsage: This metric should not exceed 50 MB. If it does, we recommend that you increase the ConnectionOverhead parameter value.


  • Metrics for Redis:
  • CPUUtilization: Redis is single-threaded, so the threshold is calculated as (90 / number of processor cores). For example, suppose you are using a cache.m1.xlarge node, which has four cores. In this case, the threshold for CPUUtilization would be (90 / 4), or 22.5%.
  • SwapUsage: No recommended setting; with Redis, you can only scale out.


  • Evictions: This is a cache engine metric, published for both Memcached and Redis cache clusters. We recommend that you determine your own alarm threshold for this metric based on your application needs.

  • Memcached: If you exceed your chosen threshold, scale your cache cluster up by using a larger node type, or scale out by adding more nodes.
  • Redis: If you exceed your chosen threshold, scale your cluster up by using a larger node type”
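The Redis per-core threshold above is simple arithmetic, but it trips people up on the exam, so here it is spelled out:

```python
def redis_cpu_alarm_threshold(cores):
    """Redis is single-threaded, so an alarm on total host CPUUtilization
    should fire at 90% of ONE core, expressed against all cores: 90 / cores."""
    return 90 / cores
```

For the four-core cache.m1.xlarge in the example, that gives 90 / 4 = 22.5% – a host-level CPUUtilization above that means the single Redis thread is saturated even though the host looks mostly idle.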
Posted in AWS