AWS Certified Solutions Architect – Associate S3 Study Sheet

AWS S3

Amazon S3 is durable, scalable cloud object storage based on key-value pairs. The number of objects you can store is unlimited; the largest object size is 5 TB; the largest single PUT is 5 GB.

A bucket is a logical container for objects stored in S3 – a simple, flat folder with no file system hierarchy ( however, there is a logical hierarchy implied by key names, like [ Folder/File ] ). Objects in S3 buckets are automatically replicated across multiple devices in multiple facilities within a region. Accounts get 100 buckets by default. Bucket names must be globally unique and can be up to 63 characters. Prefixes and delimiters may be used in key names. Data is managed as objects using an API; buckets can host STATIC web content only. S3 buckets support SSL encryption of data in transit and encryption of data at rest.

S3 objects are private by default and only accessible to the owner.

AWS S3 Storage Classes

Standard S3 storage [ default storage class ] offers 99.999999999% (11 nines) durability and 99.99% availability [ don’t confuse the two ], with low latency and high throughput. Supports SSL in transit and encryption at rest. Supports lifecycle management for migration of objects.

Standard-IA ( Infrequent Access ) storage is optimized for long-lived, infrequently accessed data: 99.999999999% durability and 99.9% availability, with a 128 KB minimum billable object size and a 30-day minimum storage duration. Ideal for long-term storage, backups, and as a data store for DR. Supports SSL in transit and encryption at rest. Supports lifecycle management for migration of objects.

Reduced Redundancy Storage ( RRS ) is optimized for non-critical, reproducible data that is stored at lower levels of redundancy, reducing storage cost: 99.99% durability and 99.99% availability. Designed to sustain the loss of a single facility.

Use cases: ( thumbnails, transcoded media, or other processed data that can be reproduced easily )

Amazon Glacier is optimized for data archiving at extremely low cost. Retrieval can take several hours [ 3–5 hour standard retrieval ]. The Vault Lock feature enforces compliance via a lockable policy.

Glacier use cases include: [ media asset archiving, healthcare information archiving, scientific data storage, digital preservation, magnetic tape replacement ]. You can restore up to 5% of your data for free each month [ you can set up a data retrieval policy to avoid going over the free tier ].

Glacier as a standalone service: data is stored in encrypted archives that can be as large as 40 TB. Vaults are containers for archives, and vaults can be locked for compliance.

All classes support S3 lifecycle management policies: transition of objects to a different class and expiration of objects.
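
A lifecycle policy can be attached with a single CLI call. Here is a minimal sketch ( the bucket name, prefix, and day counts are placeholders I chose for illustration ):

    # Transition to Standard-IA at 30 days, Glacier at 90, expire at 1 year
    aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "archive-then-expire",
          "Status": "Enabled",
          "Filter": { "Prefix": "logs/" },
          "Transitions": [
            { "Days": 30, "StorageClass": "STANDARD_IA" },
            { "Days": 90, "StorageClass": "GLACIER" }
          ],
          "Expiration": { "Days": 365 }
        }]
      }'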

S3 supports MFA ( multi-factor authentication ) Delete – to protect from accidental deletes. MFA Delete can only be enabled by the root account.

S3 DATA CONSISTENCY MODELS

  • S3 provides read-after-write consistency for PUTs of new objects
  • S3 provides eventual consistency for overwrite PUTs and DELETEs

AWS S3 Security

Bucket policies – written in JSON; you can grant ( allow / deny ) permissions to users to perform specific actions on all or part of the objects in a bucket. Broad rules across all requests. Can restrict by HTTP referrer or IP address.
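
As a sketch of what that looks like in practice, here is a hypothetical policy ( bucket name and CIDR are placeholders ) that only allows object reads from one IP range, applied via the CLI:

    # Allow GetObject only from a trusted IP range (all names are placeholders)
    aws s3api put-bucket-policy --bucket my-bucket --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "AllowGetFromOfficeIP",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
      }]
    }'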

Bucket ACLs – grant specific permissions ( READ, WRITE, and FULL_CONTROL ) to specific users.

IAM policies – grant IAM users fine-grained control over their S3 buckets while maintaining full control of everything else.

Encryption

SSE-S3 keys – checkbox-style encryption where AWS is responsible for key management and key protection. Every object is encrypted with a unique key, and that key is itself encrypted by a separate master key; a new master key is issued monthly, with AWS rotating keys.

SSE-KMS – AWS handles key management and protection for S3, but you manage the keys in AWS KMS. Separate permissions are required for the master key, and KMS provides auditing so you can see who used the key to access which object, and lets you view any failed access attempts.

SSE-C – used when the customer wants to maintain their own keys but does not want to maintain an encryption library. AWS encrypts and decrypts objects, while the customer maintains full control of the keys.
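
All three server-side options are exposed as flags on the AWS CLI. A minimal sketch ( bucket, file, and key names are placeholders ):

    # SSE-S3: let AWS manage the keys (AES-256)
    aws s3 cp API.txt s3://my-bucket/ --sse AES256

    # SSE-KMS: encrypt with a KMS master key you manage (key alias is a placeholder)
    aws s3 cp API.txt s3://my-bucket/ --sse aws:kms --sse-kms-key-id alias/my-s3-key

    # SSE-C: supply your own 256-bit key with the request
    aws s3 cp API.txt s3://my-bucket/ --sse-c AES256 --sse-c-key fileb://sse-key.bin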

Client Side Encryption

This is used when you want to encrypt data BEFORE sending it to S3. The client has the most control, maintaining end-to-end ownership of the encryption process. You have two options:

Use an AWS KMS managed customer master key

Use a client-side master key

Pre-signed URLs: use pre-signed URLs for time-limited download access.
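
Generating one takes a single CLI call; the URL works until the expiry you set. A quick sketch ( bucket and object names are placeholders ):

    # URL valid for one hour (3600 seconds)
    aws s3 presign s3://my-bucket/report.pdf --expires-in 3600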

Bucket Versioning

Allows you to preserve, retrieve, and restore every version of every object stored in the bucket. Once versioning is turned on, it cannot be turned off, only suspended.
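
Enabling ( or later suspending ) versioning is one call against the bucket. A sketch with a placeholder bucket name:

    # Turn versioning on; to suspend later, use Status=Suspended
    aws s3api put-bucket-versioning --bucket my-bucket \
        --versioning-configuration Status=Enabled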

Cross Region Replication

Allows you to asynchronously replicate all new objects from a source bucket in one AWS Region to a target bucket in another Region. The metadata and ACLs associated with each object are replicated along with it. If CRR is enabled while objects are already in the bucket, those are not affected; only NEW objects are replicated. Versioning must be enabled on both source and destination buckets for CRR to work, and you must have an IAM role giving S3 permission to replicate objects on your behalf.
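
A replication configuration ties the rule to that IAM role. A rough sketch ( bucket names, account ID, and role ARN are placeholders ):

    # Replicate all new objects to a bucket in another Region
    aws s3api put-bucket-replication --bucket source-bucket \
      --replication-configuration '{
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
          "Status": "Enabled",
          "Prefix": "",
          "Destination": { "Bucket": "arn:aws:s3:::target-bucket" }
        }]
      }'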

Event Notifications and Logging

Event notifications are set at the bucket level and can publish a message to Amazon SNS or SQS, or invoke an AWS Lambda function, in response to an upload or delete of an object ( by PUT, POST, COPY, DELETE, or multipart upload completion ). You can configure event notifications through the S3 console, through the REST API, or by using the AWS SDKs.
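
As a sketch, here is what wiring a bucket’s uploads to a Lambda function might look like via the CLI ( bucket name and function ARN are placeholders; the function must already grant S3 permission to invoke it ):

    # Invoke a Lambda function on every object created in the bucket
    aws s3api put-bucket-notification-configuration --bucket my-bucket \
      --notification-configuration '{
        "LambdaFunctionConfigurations": [{
          "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
          "Events": ["s3:ObjectCreated:*"]
        }]
      }'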

Logging is off by default. When you enable logging for a source bucket, you must choose a target bucket.
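
Turning it on is one call pointing the source at the target. A sketch with placeholder bucket names ( the target bucket must grant the S3 log delivery group write access ):

    # Deliver access logs for source-bucket into log-bucket under a prefix
    aws s3api put-bucket-logging --bucket source-bucket \
      --bucket-logging-status '{
        "LoggingEnabled": {
          "TargetBucket": "log-bucket",
          "TargetPrefix": "source-bucket-logs/"
        }
      }'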

Encrypt your AWS API Key with GPG

In AWS, when you create a user in IAM and give that user ‘programmatic’ access, AWS will give you that user’s API key. There are two major rules one must follow with the API key.

  1. NEVER hard code your API key into your code.
  2. Never store your API key unencrypted.

To help with #2, in Linux you can just use GPG.

First, install it. For Ubuntu:

    sudo apt-get install gnupg2 -y

or for RHEL/CentOS:

    yum install gnupg2

and then just run it against the text file where your API keys are:

  1. Encrypt the file with the command
    gpg -c API.txt
  2. Enter a unique password for the file and hit Enter.
  3. Verify the newly typed password by typing it again and hitting Enter.
  4. Looking at the output of an ls -hal, the original plaintext file is still there next to the new API.txt.gpg, so remove it:
    rm -f API.txt
  5. When ready, decrypt the file with the command
    gpg API.txt.gpg

Crazy Ransomware based on NSA tool spreads across the Globe!

The security defenders are working overtime today to stop WCry [ WannaCry ]. Right now, 45,000 attacks of the WannaCry ransomware have been reported in 74 countries around the world. This is pretty bad: there are reports of hospitals being shut down due to this, service providers shutting down their computers, and many other reports of companies affected.

According to Kaspersky, WCry is leveraging an SMBv2 remote code execution exploit derived from the NSA toolkit. Here is the US-CERT confirmation of WCry.

MS17-010 is the patch Microsoft released in March to address the vulnerability this piece of ransomware is using; if it is not on your system, you are vulnerable.

Close firewall ports 445/139 and 3389.

Here is a live map at MalwareTech monitoring the spread.

Kaspersky’s global lab has a solid forensic write-up.

Fig 1: Machine infected with WCry. Image via Twitter, Malware Hunter Team.

 

Big Box Electronics Retailer has Vulnerable Cisco IP phones connected to their POS systems

You only have to go as far as your neighborhood electronics store to see poor Security practice. I snapped this photo below yesterday:


What’s wrong here, you ask? Well, a couple of things. This particular Cisco phone went end-of-life in 2009; see the link for verification:

http://www.cisco.com/c/en/us/products/collateral/collaboration-endpoints/unified-ip-phone-7940g/end_of_life_notice_c51-526372.html

End of life in 2009 means that Cisco has not been writing security patches or software updates for the phone since that time. And because Cisco IP phones basically act as a 2-port switch, what this is at its most basic is an 11/12-year-old network switch: one little blue Ethernet cord going into the electronics retailer’s internal network, and the other little blue cord going to the POS system.

Second, Cisco phone setups like this [ where the phone is acting as a switch for another network endpoint/host ] are actually poor choices for customer-facing kiosks and counters, because the design exposes a physical Ethernet port to the public and is open to tampering. Here, this poor choice has been compounded by the fact that this particular electronics retailer has not engaged in a badly needed hardware refresh for 9 years, making the phone itself a target for a number of known public hacks.

To mitigate this, any Cisco phone that acts as a customer/public/lobby phone should not have another endpoint connected; furthermore, the 2nd network port can and should be disabled on that phone. ( Cisco does allow the 2nd port to be disabled. )

In this case, where the phones are obviously for both store employee AND customer access, any way to physically wall off or protect those network ports from tampering would assist in mitigation. ( I’ve heard of people actually supergluing the Ethernet cables into the phone. ) The truth is, the overall design of using the 2nd Ethernet port to connect to the POS system in an area that is clearly accessible to the public was a huge disservice to this particular retailer by the vendor/company that sold them that design, regardless of how old the IP phone is. That second port on the phone is really meant to be used inside secure office buildings, at cubicles, in employee offices with their own physical controls in place, e.g., areas not accessible to the public.

These 7940 and 7960 phones were all over the store, connected to store POS systems, not just at the counter where I snapped the photo. Theft of customer credit card data does not really seem to raise eyebrows these days, so I will not go into that so much; however, I will touch on the point that these systems are used by employees to access all store inventory [ e.g., at a fundamental level, to modify a database and the attributes of objects in that database ]. Anyone with knowledge of these systems could easily gain access by placing a remote tap on the Ethernet port when no one is watching and, with some work and a little reconnaissance, pretty much own the entire database. Worst-case scenario, yes, but that is what I see in the picture above.

Until next time!

AWS Workshop Automating Security in the Cloud in Denver: Day 2

Day 2 – another gorgeous day in Denver, looking west at the Rockies from the main event room on the 11th floor of the Marriott. A great day to learn about Cloud security in AWS.

The day kicked off with a presentation from cloud compliance vendor Telos on their solution, Xacta.

Again, forgive the brevity and partial sentences; these are, literally, my notes transcribed as fast as I could write in my notebook.

The good stuff started when Nathan McGuirt, Public Sector Manager, AWS, took the stage for AWS Network Security; this is where my notes begin:

Nathan opens: almost everything is an API and can be automated. APIs automate network discovery ( things network tools used to do ).

AWS network configs are centralized. A VPC is NOT a VLAN or a VRF! The pressures to migrate to large, shared networks are all but gone.

Micro-segmentation allows for smaller networks that are more granular in function, reduce application risk via isolation, and reduce the ( effect of ) deployment risks of new systems.

Security Groups (SGs) have become the most important piece of security in AWS; Security Groups are essentially stateful firewalls that can be deployed per instance / per VPC.

Two EC2 instances in the same Security Group don’t communicate by default. Security Groups are flexible and rules can be set to allow another Security Group as the source of the traffic.
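
A sketch of that kind of rule from the CLI ( the group IDs and port are placeholders ): allow one Security Group to talk to another on a single port.

    # Let members of sg-22222222 reach members of sg-11111111 on TCP 443
    aws ec2 authorize-security-group-ingress --group-id sg-11111111 \
        --protocol tcp --port 443 --source-group sg-22222222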

The idea is role-based Security Groups: permissions to communicate based on role. Security Group traffic shows up in VPC flow logs in a NetFlow-like format.

VPC subnet ACLs ( access control lists ) are a secondary layer of protection. ACLs are NOT stateful, so ephemeral ports have to be allowed for return traffic. ACLs are better suited to solving broad problems, like communication between VPCs.

Route tables are now thought of as another Security mechanism for isolation and to control permissions; route tables create an extra set of protections.

Stop thinking about filtering at layers 3/4 and focus on filtering the application:

  Ingress filtering tech: reverse proxy ( Elastic Load Balancer ), Application Load Balancer with WAF, CloudFront CDN with WAF

  Egress filtering: Proxies / NAT

Moving outside the VPC – VPC peering – inside-to-inside routes; Virtual Private Gateway to connect to on-prem legacy VPN hardware; AWS Direct Connect ( dedicated fiber between AWS and on-prem ).

No transitive VPC peering: no A -> B -> C.

 – some workarounds, using proxies / virtual router AMIs and routing to them instead of VPC peering ( does not scale large )

 – Nathan talked about a management services VPC that has all needed services in it ( auth, monitoring, etc. ) and connecting all VPCs to that.

VPC design: plan early, plan carefully; think about subnets and how you want to scale the VPC before you implement it, since subnets cannot be resized once created.

CloudFormation and change control: a new AWS feature called ‘Change Sets’ allows you to stage your CloudFormation builds and assign different permissions to developers, approvers, and implementers.

Talked about how powerful AWS Config https://aws.amazon.com/config/ is; it scans environments, keeps object history and change logs, and gives snapshots of the whole AWS environment.

Logging: VPC flow logs / NetFlow for all things – they go to CloudWatch for every instance in the VPC. Rather than watching the noisy edge, combine tight Security Groups with VPC flow logs within and between VPCs and look at what is getting denied – people trying to move sideways. Watch for config changes that impair your ability to see config changes ( e.g., disabling of CloudTrail ) – automated remediation is possible through APIs. Watch for disallowed configurations.
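
Turning on flow logs for a VPC is itself just an API call. A sketch ( VPC ID, log group, and role ARN are placeholders ) that captures only rejected traffic, per the advice above:

    # Send only REJECTed flows for the VPC to a CloudWatch Logs group
    aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-1a2b3c4d \
        --traffic-type REJECT --log-group-name vpc-reject-flows \
        --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role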

Next up was Jim Jennis, AWS Solutions Architect.

 Jim has a solid background – worked for the US Coast Guard.

Jim: automate, deploy, and monitor in the security SDLC; build in security and verify before deployment.

Design > Package > Constrain > Deploy

Using CloudFormation to describe AWS resources: JSON templates, key-value pairs.
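
For flavor, a tiny sketch of what such a template and stack launch can look like ( the stack and resource names are mine, not from the session ):

    # Minimal CloudFormation template: one versioned S3 bucket
    cat > template.json <<'EOF'
    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "LogBucket": {
          "Type": "AWS::S3::Bucket",
          "Properties": { "VersioningConfiguration": { "Status": "Enabled" } }
        }
      }
    }
    EOF
    aws cloudformation create-stack --stack-name demo-stack --template-body file://template.json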

How Security in the Cloud is different:

– Everything is an API Call

– Security Infrastructure should be Cloud Aware

– Security Features as APIs

– Automate Everything

Use federation to leverage the AD accounts of existing staff to get access to AWS, and lock users down to their specific tasks.

[ Trust relationship with AD required ]

Preso back to Tim and Nathan McGuirt together.

 The AWS CLI matches the API – live demo of the AWS command line ( learning the AWS CLI would be a good way into learning the API ).

Nathan used the AWS CLI to:

  find EBS volumes that are unencrypted

  find Security Groups / with ssh allowed

  give a list of instances attached to groups found from last command

  ( Tim said this could be used to find IAM groups with resource of * )
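
I did not capture Nathan’s exact commands, but queries along these lines reproduce the demo ( the Security Group ID in the last one is a placeholder ):

    # EBS volumes that are not encrypted
    aws ec2 describe-volumes --filters Name=encrypted,Values=false \
        --query 'Volumes[].VolumeId'

    # Security Groups with an ingress rule starting at port 22 (ssh)
    aws ec2 describe-security-groups --filters Name=ip-permission.from-port,Values=22 \
        --query 'SecurityGroups[].GroupId'

    # Instances attached to one of the groups found above
    aws ec2 describe-instances --filters Name=instance.group-id,Values=sg-11111111 \
        --query 'Reservations[].Instances[].InstanceId'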

Tim: this demo is the basis of automation to find things that are not supposed to be there; run these multiple times a day to locate things that are happening out of scope in your environment.

USE accounts and VPCs to limit the blast radius of compromise.

 – Accounts are inherently isolated from one another and provide isolation of both access and visibility for deployed resources

    – compromise of one account limits impact

Place hardened templates in Service Catalog, and employees can only select your hardened templates / AMIs when spinning up new instances.

My thoughts: security people who want to protect AWS need to learn to understand APIs and code to fully leverage automation, and to use the Amazon CLI. Queries are powerful and can be leveraged to tell you very specific things about an environment; I’ve seen things today that make the old network tools I used to use look like a Ford Model T. Security and network professionals need to adapt and become application-centric thinkers. AWS has incredible built-in tools to ensure proper secure design, from the creation of hardened AMIs / CloudFormation templates to the ability to lock down the deployment of pre-approved instances. Security Groups and IAM let you get very granular at the permission level, as even applications can be granted permissions through access roles.

When thinking about everything I have done in the last 20 years as a network engineer / sysadmin / security engineer, what I experienced over the last two days really reminded me of the scene in the ‘Doctor Strange’ movie where Strange shows up at the Kamar-Taj temple and begins his true teaching. The Ancient One tells him that what he had learned before got him far in life, but that he needed to embrace a whole new and completely different way of thinking.

For me, I am not as reluctant as Strange was, but the idea of embracing the concept that an entire infrastructure is just CODE – servers, routers, and firewalls are now just coded objects, and APIs are the “neurons” that make it all work – is just as impactful.

I am grateful to AWS Engineers and the Vendors for coming to Denver!

THANK YOU!!!

AWS Workshop Automating Security in the Cloud in Denver: Day 1

When I found out AWS was going to be doing a two-day security class in my hometown at the end of April, I took no chances and simply requested two days of PTO so I could go to this event.

Day one CORE content was taught by Tim Sandage, Senior Security Partner Strategist, AWS. Tim knows his world very well and relayed the material in a clear and concise way, keeping the audience engaged at all times.

There were a slew of Cloud security vendors there as well; my favorite vendor presentation was by Aaron Newman, CEO of CloudCheckr. Aaron is a very passionate and dynamic speaker – and the CloudCheckr product gives a solid, security-centric view of your AWS assets.

Some of the ideas presented below carry over from standard on-prem infrastructure security, such as the Principle of Least Privilege; some are unique to the Cloud, such as CloudFormation templates.

Below are my notes, transposed here from a furiously fast handwritten catch during the event. Please excuse any brevity, partial sentences, etc.

Tim Sandage notes: DevOps is rising. There is a shortage of individuals to maintain security. DevOps needs security ingrained into its process. Focus on Continuous Risk Treatments (CRT). Need to bring security risk treatment into the DevOps lifecycle. Technology drives governance. Governance is shared between AWS and the end customer. Security automation is key to successful governance. The API is the common language through which policy can be automated.

AWS is a SHARED security model. Security by Design ( SbD ) is an AWS assurance approach that formalizes account design and automates controls and auditing. Cloud templates / CloudFormation are key.

Continuous monitoring is not enough – it does not catch bad guys; we need to create a workflow to help with tech governance.

Security is the top priority at AWS. AWS has over 1,800 accreditations/certifications geared toward being a compliant provider for its customers.

When moving to the Cloud, consider the boundary extensions and how they can affect compliance. Before moving to the Cloud, first look at dependencies and traffic flows, to ensure you are not moving something into the Cloud that is not compliant.

[SHORT BREAK] VENDOR PRESO: Okta did a presentation on securing the Cloud and tools for Cloud compliance.

Back to Tim: talked about a slide that showed rationalizing security requirements/standards – a framework for building internal policies based on industry standards, common practices, laws, and requirements. Talked on shared responsibility control models: 1. Inherited Control, 2. Hybrid Control, 3. Shared Control, and 4. Customer Control.

Tim: talked about defining data protection; laid out problem statements: 1. most orgs don’t have a data classification scheme, 2. most orgs don’t have an authoritative source for data, 3. most orgs don’t have a data lifecycle policy.

Topic change – breaches: breaches that involve actors obtaining data from a stolen laptop or cell phone happen because the world has moved to a mobile workforce and the employer has not caught up with that; e.g., still storing data on employee devices and not in a central location protected by predefined automated policies. Nothing should be on a local laptop or phone.

Make sure least privilege is always used – no human should ever have admin ( at least not for very long ); instead, the user gets promoted to an admin role, and then the STS token expires and the role goes away.
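
That promotion pattern maps to an STS call like the following sketch ( the role ARN, account ID, and session name are placeholders ); the returned credentials simply expire:

    # Temporary admin credentials, good for one hour
    aws sts assume-role --role-arn arn:aws:iam::123456789012:role/AdminRole \
        --role-session-name temp-admin --duration-seconds 3600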

Topic change – assets: everything in AWS is an asset; instance IDs are unique, Security Group IDs are unique. Customers choose what to track and decide how these assets are protected. Assets are identified by Amazon Resource Name ( ARN ). Enable a data protection architecture – a methodology to lock down assets launched in AWS, using CloudFormation templates with pre-defined policies. [ no EC2 launched in the root account, no S3 buckets with world read ]

CLOUD SECURITY TIP: CloudTrail needs to be enabled for all Regions – to mitigate against internal people spinning up instances in Regions you don’t operate in, to mitigate against external threats, and to expose any badness in your account. It shows activity in a space in which you do not operate.
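
A sketch of enabling that from the CLI ( trail and bucket names are placeholders ):

    # One trail covering every Region, delivered to S3, then start logging
    aws cloudtrail create-trail --name all-region-trail \
        --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail
    aws cloudtrail start-logging --name all-region-trail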

CLOUD SECURITY TIP: AWS root account – delete the root admin API key. Never use root to do any task; there is no repudiation in the logging. Instead, log in, create two admin accounts, and turn on CloudTrail and MFA.

AWS security architecture recipes using CIS Benchmarks – tactical control rules for hardening. The CIS Benchmarks have been “templatized” for CloudFormation and are current on GitHub.

Use tags on assets – a mapping correlation for proof of documentation of which CIS template was used to meet compliance.

[ My thoughts on day one ] This is HUGE! Basically, AWS assets – EC2 instances, S3 buckets, RDS instances – can all be deployed with these CIS-based templates, ensuring that all assets have the same security controls/permissions on them each time they are deployed, and then mapped back to an audit control via the naming in the TAG. Basically, hand the auditor a paper and say “bye bye now.” This complements custom, secure AMIs ( Amazon Machine Images ) that are deployed from the original AMI that was built.

Great security tips were included and woven into each presentation. My notes did not capture all of this, but I did my best to get the main points – I have things in my notebook that may appear in later blogs. Also, Amazon does a fantastic job of documenting all of this:

https://aws.amazon.com/documentation/

See you all tomorrow at Day 2!

 

It’s time to use a VPN and DuckDuckGo.

The House has now voted to allow ISPs to sell your browsing habits. As this last remaining privacy barrier crumbles, the question I have had from a few friends and family members is: should I use a VPN, and if yes, which one?

The first part of that question: “should” you use a VPN? Well, I believe YES, for some things.

  1. Medical conditions. If you are doing research regarding medical conditions for yourself or a family member, I would consider that private information you don’t want bought and sold. Remember when Target knew a girl was pregnant before her own family? You can avoid situations like that by using a combination of a private search engine like DuckDuckGo and a VPN service. Furthermore, DuckDuckGo has an iOS/Android app that won’t save history and cookies.

2. Job hunting. I would think job hunting falls under the category of things I don’t want my ISP to know about, as we don’t know WHO will be buying your internet browsing history – and we don’t want that to be your current employer.

3. Financials. Although all banking traffic in transit is encrypted with HTTPS these days, data about which banking institutions you use, as well as the frequency of your logins, may be marketable. That’s nobody’s business.

4. Websites and data that could be misconstrued by future employers, lenders, and apartment leasing agents. ( Because that’s who is buying your data. ) This is obviously more generic and requires some thought. It is now necessary to think about what kind of story your browsing habits tell about you, from multiple angles. Use a VPN when you go somewhere that is nobody’s business.

Enter TunnelBear

Now, we move to the second part: which VPN service should I use? My personal favorite is TunnelBear. It’s super easy to use, with a negligible performance hit – downloadable as an app for iOS or Android, a client for Mac or Windows, or a browser extension for Chrome. AES-256 encryption and SUPER cheap. There is a FREE tier that will give you 500 MB a month. That should be plenty as long as you are not tunneling your videos or tunneling big file downloads.

When you launch TunnelBear and browse to any site on the internet, all your ISP will see is traffic to TunnelBear’s VPN service. Sweet, yeah? Sell that, Comcast!

There are other VPN services similar to TunnelBear out there; price and performance vary for each.

Also, let us not forget about cookies! Although this new law sucks in its entirety, stored browser cookies tell the sites you visit more about where you’ve been than any other mechanism. Although most sites require you to store cookies nowadays, it is something to be aware of. DuckDuckGo’s mobile app browsers have cookies off by default. If you truly want to stay private, you must: not use Google or Google Chrome to search; not visit any site with a browser holding cached cookies of your browsing history ( e.g., clear your cookies often or use DuckDuckGo mobile ); and use a VPN service.

A few last notes: personally, I don’t think there is a need to tunnel Netflix, Hulu, and other video sites – some VPNs will introduce performance issues on those services, and frankly, I don’t see the reason . . . yet. Social media sites are another big can of worms that I don’t really want to open here, except to say they are ALREADY selling your info. What the ISPs would get through this new law is how frequently you visit Facebook and others . . . so, your call. Really, the list above is a good place to start your privacy.

Stay safe!

Disclaimer: I am not paid or employed by TunnelBear. I just like their service. What is offered above is my unpaid, unbiased opinion only.

 

Social Security Card was never meant to be used as an ID card.

This is a solid 7-minute video explaining the inception and original INTENDED use of the United States Social Security card, and how it was NEVER originally intended to be used as an identification card. What?? REALLY???

The other “shocker” is that there is no security built into the card’s number schema – pretty amazing [SAD] when you think about how much this number is used for major financial transactions ( outside of Social Security itself ). Yes, the number is fairly guessable if you know the state of birth and year of birth. Because of this, and the plethora of security breaches that have taken place over the years, I assume compromise when it comes to my own number ( e.g., the bad guys have it somewhere, waiting to be used, along with millions of others’ SS numbers ).

So then – the BEST thing you and I can do is utilize some kind of credit monitoring service, or continuously keep your credit report in FRAUD alert mode, making it hard for other people to open accounts in your name. FRAUD alerts must be reset every 90 days, or every year if you are active military. FRAUD alerts fall under “ I don’t have to outrun the bear, I just need to outrun you ”: fraud alerts don’t guarantee your identity can’t be stolen; they just make it harder to steal yours than the next guy’s 🙂

If you set an alert with one of the Bureaus – they notify the other ones. Here is the Equifax link to set an initial 90 day alert.

On credit monitoring – I have sort of a love-hate relationship with credit monitoring services, because I believe credit monitoring is something that the three (four) major credit bureaus should BE DOING ANYWAY, as it clearly falls under their due diligence as record keepers of such critical information. BUT they don’t . . . and we must pay for the credit monitoring service from one of the major bureaus or a third party. That’s kind of wrong, but necessary in my eyes.

Stay safe!


Why I am pursuing Amazon Certified Solutions Architect Certification

It’s time to switch things up here a bit . . . While we security professionals have been shunning all things Cloud – jumping up and down, waving our hands, desperately warning our employers not to move data there – the world has gone on without us, and enterprises are aggressively moving data and services to the Cloud. This is driven largely by the massive cost savings associated with Cloud services in general. Software is eating the world, and Cloud is making that possible. Businesses are moving in that direction. The time has come (or it has been here a while) for InfoSec as a whole to embrace this practice of having data, services, and workloads in the Cloud, and to do the best we can to secure the Confidentiality, Integrity, and Availability of those services and data.

Of all the companies in this space, Amazon is kicking serious ass as the world leader in Cloud platforms. Amazon AWS is the fastest growing multi-billion dollar enterprise IT company in the world – $14 billion, to be exact. Many heavy hitters are putting dollars and infrastructure in the Cloud. Amazon is putting tremendous resources into the Cloud in its own right and has bet the farm on it; they are innovating at a furious pace, as their own shopping services rely on their own AWS Cloud. AWS is experiencing 47% year-over-year growth!

Across all of my IT personas – Information Security, Network, Linux guy, Server guy – I believe that an understanding of the Cloud, securing the Cloud, and writing code for the Cloud is the best way I can serve others and provide for my family in the many years to come. The economics driving this push to the Cloud [ ultra-cheap compute power, no longer paying to build and maintain data centers, and paying for services only when your customers use them are just a few of the drivers ] simply cannot be ignored! That is why I am pursuing the

AWS Certified Solutions Architect – Associate first, followed by the AWS Security Specialty exam, then the Professional Architect exam. Presently, I am doing a ton of hands-on labs after hours in the AWS free tier and am enrolled in a great course by A Cloud Guru. The course is taught by Ryan Kroonenberg and is SUPERB! I’m 70% through the first run.

I believe the Amazon cert program is solid, will complement my background and skill set, and will increase my value to employers. So, yeah, I am excited! I am learning at a pace and depth I never thought possible! I think that is because I truly enjoy the AWS platform, the multitude of services offered, and how to interface and build with them!

Disclaimer: I do not work for Amazon, nor am I paid by Amazon.

Stay Safe!

 

UPDATE: At the AWS ‘Automating Security in the Cloud’ event I attended ( a few weeks after this posting ), audience polling showed most attendees were there because their employers are moving data and predictable workloads into AWS. Amazon and its partners are making it easier to address regulatory compliance: FedRAMP, 800-53. All this means the FRICTION that used to exist in getting on-prem data/workloads into the Cloud is eroding. That’s not an accident.

Cisco’s CORE CCNP Program is badly outdated

Hi friends – thought I’d change things up a bit. Recently, I passed Cisco CCNP SWITCH 300-115. It was necessary to renew my CCNP. I have been a CCNP since 2002, with only a small two-year gap where I let my cert expire and had to retake all the exams again. That was so painful, I vowed never to let it expire again. I believe in the value of the CCNP (otherwise I would not have plunked down the $300, sat the test, and renewed it), although nowadays it seems like the CCNP is more of a way to get interviews quickly than an accurate reflection of what true network engineering is today – or, more importantly, what it will be tomorrow.

The CCNP SWITCH exam topics are really a lot of the same fundamentals I studied 15 years ago when I first got my CCNP, excepting newer versions of protocols (VTPv3) and a few new ones (LLDP). The exam outline includes things like Cisco StackWise technology – technology for branch offices and IDF closets only. Looking back at it, I feel like much of the material presented for the CCNP SWITCH exam was really more for a “Cisco Certified Branch Office Engineer.”

In Cisco’s defense, there is a CCNP Data Center certification. This is a Cisco Nexus-centric program which focuses on many Cisco-centric technologies like vPC, OTV, and FabricPath. Although the Data Center program is a SOLID path for learning Nexus technologies and working in Nexus-populated data centers, this path is geared more toward a specialist track (e.g., if you are already a CCNP, now four separate tests for another CCNP???), and thus Data Center technology is not part of the CORE Routing and Switching (RS) CCNP program – yet the RS CCNP is still the measuring stick for network engineering knowledge when it comes to locating the best candidates.

Why do I think the CCNP program is badly outdated? I think those in networking pretty much know by now that the future of networks is programmable, software-defined networks. Software-Defined Networking (SDN) is here today, it will rule tomorrow’s world, and it is thus what we need to be teaching students today. Proof is everywhere.

Facebook has been using its own proprietary bare-metal programmable network hardware for some time. Companies like Cumulus Networks and Arista Networks are already heavily invested in this route, selling to enterprise customers to replace traditional gear. Cisco has delved deeply into Software-Defined Networking itself through its ACI platform. SDN is quickly happening to network hardware. This topic – SDN replacing traditional network hardware – is really its own blog post, and I bring it up here to substantiate my claim that Cisco’s CORE CCNP program is outdated. So why is SDN NOT on the CCNP RS? Good question.

The answer is: it could be. Here is the saving grace – Cisco does have the NPDESI exam, which tests the new technology that CCNP candidates should be learning, such as using Python for programming, APIs, and securing the network controller. Sadly though, NPDESI at present is a weird specialist stand-alone exam named with an acronym NO ONE will recognize, and it is not very marketable.

So you ask, how should the CCNP be rebuilt? Well, for starters, make it four tests again (like it used to be). Change the program name from CCNP Routing and Switching to ‘CCNP CORE’. This is what Cisco’s CORE should look like:

  1. Make Routing and Switching fundamentals into one exam ( move branch and older technologies to the CCNA or remove them altogether ). 60-70 Qs, 120 minutes
  2. Make the SDN NPDESI a required exam for the CCNP CORE. ( And please rename it to something like ‘Programming Cisco Networks – PROG’!! )
  3. Make one CCNP CORE exam for all Cisco Nexus Data Center technologies ( kill the separate Data Center CCNP track ). 60-70 Qs, 120 minutes
  4. And finally, a new TSHOOT exam covering troubleshooting of all the topics on the other three.

This new proposed exam format would also continue to move away from the MCQ format and toward simulators, or even GNS configurations performed during an exam sitting that could be turned in and graded later. This new exam format would be purposefully harder, longer, and more challenging. Students who complete the new training would be better network engineers, more equipped for the networks of tomorrow.

CCNP RS is arguably Cisco’s most popular certification, second to the CCNA. A big change in the CCNP program would also benefit Cisco as an organization, as the engineers who support their products would be learning-focused in the areas/products in which the company is trying to grow. It makes so much sense. In some places it makes sense to have a separate certification track, like Security, but it does not make sense to have two CCNP tracks – one obviously geared for data centers and one geared for (not data centers?).

So . . . yeah – I think those changes would really make the CORE CCNP a solid, reputable program again, one that would accurately reflect what candidates need today and tomorrow to be strong network engineers. The CCNP could really mean something again.

UPDATE:

About a week and a half after I published my thoughts on the CCNP, Business Insider published this article about AT&T’s own open-source SDN software, ECOMP. Other than a very bad stock photo, the article is good. One very good point made, which I did not think of when I wrote the blog above, is that Cisco’s own SDN is only available on the Nexus 9000 at this time. Not exactly a pervasive technology inside Cisco, which explains why they have not placed it on the CCNP.

Meanwhile, Cisco’s pro-SDN competitors are pushing FREE training:

http://www.bigswitch.com/press-releases/2015/02/19/big-switchs-bsn-labs-now-available-for-free-hands-on-experience-with-sdn

Driving it home:

Federal Agencies are now discussing how to implement SDN:

http://www.fedtechmagazine.com/article/2017/04/feds-plan-new-networking-world-sdn

Other people are questioning the same thing:

http://searchsdn.techtarget.com/news/2240232201/Network-pros-need-SDN-training-not-CCIE-status